13 Mar, 2024
Nginx 101: The Architecture That Powers Modern Websites

Web servers are a crucial piece of software in today’s world. From the traditional Apache server to modern, sophisticated alternatives, a handful of major web servers have secured their place on the World Wide Web. Nginx is one of the leading web servers today.
Nginx, pronounced ‘engine X’, is a high-performance, lightweight, highly concurrent web server that can also be used as a reverse proxy, load balancer, and HTTP cache. Nginx was written by a Russian developer named Igor Sysoev to address the C10k problem: optimizing network sockets so that a single server can handle ten thousand clients at the same time. (Handling thousands of concurrent connections is a different problem from handling many requests per second.) The term was coined by Daniel Kegel, a prominent software engineer, in 1999 to capture the increasing demands of web services. As server hardware, operating systems, and network resources became major constraints on website growth, developers worldwide looked for ways to optimize web servers to handle large numbers of connections, and Nginx has proven to be one of the most successful solutions to this problem. Development began in 2002, and Igor released the first public version of Nginx in 2004 after two years of active development.
Nginx is free and open-source server software released under the terms of the 2-clause BSD license. In 2011, Igor and Maxim Konovalov founded Nginx Inc. to provide commercial support for Nginx and to sell NGINX Plus, a paid edition with advanced features. Nginx Inc. was acquired by F5 Inc. in March 2019. By 2024, Nginx had taken first place in web server market share, surpassing Apache.

The Nginx codebase is entirely original, written from scratch in C. Nginx ships its own libraries and, with its standard modules, uses little beyond the system’s C library, apart from zlib, PCRE, and OpenSSL. It is worth mentioning that Nginx’s compatibility with the Windows environment is quite poor: limitations in both Nginx and the Windows kernel architecture lead to poor performance, no caching, and no bandwidth policies.
Features of Nginx
The first version of Nginx was primarily developed to serve static content and was deployed alongside Apache. Over the course of its development, Nginx has steadily integrated further features.
One of the main features of Nginx is its ability to handle more than 10,000 simultaneous connections with a low memory footprint (approximately 2.5 MB per 10,000 inactive HTTP keep-alive connections). Beyond that, Nginx provides features in three main areas:
- HTTP server features
- Mail proxy server features
- Generic TCP/UDP proxy server features
Under its HTTP server features, Nginx primarily acts as a reverse proxy, load balancer, and HTTP cache. It also supports file serving, indexing, and autoindexing, and serves dynamic content through FastCGI and SCGI handlers for scripts. Nginx can handle HTTP/2 as well as HTTP/3 traffic, and it supports HTTPS via SSL/TLS with SNI. Other notable features include URL redirection, URL rewriting, keep-alive and pipelined connections, handling of 3xx redirects and 5xx error codes, and FLV and MP4 streaming.
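As a rough illustration, several of these HTTP-side roles can be combined in one configuration. The following is a hypothetical sketch; the upstream addresses, certificate paths, and cache zone name are placeholders, not values from any real deployment:

```nginx
http {
    # HTTP cache: on-disk store with a shared-memory key zone
    proxy_cache_path /var/cache/nginx keys_zone=static_cache:10m;

    # Load balancing: requests are distributed across these backends
    upstream app_servers {
        server 127.0.0.1:8080;
        server 127.0.0.1:8081;
    }

    server {
        listen 443 ssl;                 # HTTPS with SSL/TLS
        server_name example.com;
        ssl_certificate     /etc/nginx/certs/example.com.pem;
        ssl_certificate_key /etc/nginx/certs/example.com.key;

        location / {
            proxy_pass http://app_servers;   # reverse proxy + load balancing
            proxy_cache static_cache;        # serve cached responses when possible
        }
    }
}
```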
Nginx can be used as a mail proxy server, routing users to IMAP and POP3 backends after authenticating them against an external HTTP authentication server.
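A minimal sketch of such a mail proxy configuration might look like this; the authentication server address is a placeholder:

```nginx
mail {
    auth_http 127.0.0.1:9000/auth;   # external HTTP authentication server (placeholder)

    server {
        listen   110;                # POP3
        protocol pop3;
    }

    server {
        listen   143;                # IMAP
        protocol imap;
    }
}
```

The `auth_http` endpoint decides, per login, which backend server the client is proxied to.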
Beyond HTTP, Nginx can also act as a generic TCP/UDP proxy server with full SSL/TLS SNI support and load balancing. It also supports access control based on the client address.
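For example, a `stream` configuration along these lines (all addresses are placeholders) would proxy raw TCP and UDP traffic while restricting access by client address:

```nginx
stream {
    upstream tcp_backend {
        server 10.0.0.1:5432;
        server 10.0.0.2:5432;
    }

    server {
        listen 5432;                 # generic TCP proxying with load balancing
        proxy_pass tcp_backend;
    }

    server {
        listen 53 udp;               # generic UDP proxying
        proxy_pass 10.0.0.3:53;
        allow 192.168.0.0/24;        # access control based on client address
        deny  all;
    }
}
```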
Architecture of Nginx

Apache, the web server that long dominated the Internet, has its roots in the early 1990s, when its architecture matched the operating systems and hardware of the day. By the start of the 2000s, however, it was obvious that this standalone web server architecture did not scale with the non-linear growth of websites and web services. Apache was designed to spawn a copy of itself for each new connection, which was ill-suited to fast-growing services. Apache eventually became a general-purpose web server with a rich collection of third-party extensions and tools, but the underlying problem remained: the per-connection CPU and memory overhead limited its scalability.
Nginx was written with a different architecture in mind, one much better suited to non-linear scalability in both simultaneous connections and requests per second. The result is a modular, event-driven, asynchronous, non-blocking architecture built around single-threaded worker processes, which became the foundation of the Nginx core.
Unlike traditional multi-process or multi-threaded web servers, which spawn a separate process or thread for each new connection at significant run-time cost, Nginx uses a small number of worker processes to handle many concurrent connections. This worker model scales efficiently on multi-core systems: one worker process can be attached per core, allowing full utilization of the hardware.
Nginx follows a master-worker architecture: a single master process manages one or more worker processes.
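In configuration terms, this model is typically tuned with just a couple of directives; a minimal sketch:

```nginx
worker_processes auto;          # spawn one worker per CPU core

events {
    worker_connections 1024;    # concurrent connections each worker may handle
}
```

With `worker_processes auto;`, Nginx sizes the worker pool to the number of available cores, matching the one-worker-per-core model described above.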

Three main components are involved in handling each connection that reaches the server:
- Worker processes: responsible for accepting client connections and processing requests. Each worker is single-threaded and non-blocking, and workers communicate through shared memory, including shared cache data.
- Event loop: each worker process runs an event loop that waits for events and dispatches tasks accordingly. This is the most complicated part of the Nginx core, and it relies heavily on asynchronous task handling. The guiding principle is to be as non-blocking as possible; the main exception is disk I/O, which can still block a worker when storage cannot keep up.
- Event notification mechanism: uses platform-specific mechanisms (such as epoll on Linux or kqueue on FreeBSD) to monitor file descriptors for new events.
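These components can be sketched in miniature with Python’s `selectors` module, which wraps the same platform-specific notification mechanisms (epoll, kqueue) that Nginx uses. This is an illustrative toy, not Nginx’s actual C implementation:

```python
import selectors
import socket

# Minimal event-loop sketch in the style of an Nginx worker.
sel = selectors.DefaultSelector()          # picks epoll/kqueue for the platform

# A connected socket pair stands in for an accepted client connection.
client, conn = socket.socketpair()
client.setblocking(False)
conn.setblocking(False)

# Register the connection for read events, as a worker registers
# accepted sockets with the event notification mechanism.
sel.register(conn, selectors.EVENT_READ)

# The "client" writes a request; the loop wakes only when data is ready.
client.send(b"GET / HTTP/1.1\r\n\r\n")

received = b""
for key, mask in sel.select(timeout=1):    # block until an event fires
    received = key.fileobj.recv(4096)      # non-blocking read: data is ready

sel.unregister(conn)
client.close()
conn.close()
print(received.decode())
```

The worker never blocks waiting on any single connection; it sleeps in `select()` and only does work when the kernel reports a socket is ready.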
Because Nginx does not fork a process or thread per connection, its memory usage is very conservative and extremely efficient. It also conserves CPU cycles by avoiding the constant churn of creating and destroying processes and threads.
In addition to the master and worker processes, two small helper processes manage the server’s cache: the cache loader process and the cache manager process.
- The cache loader process → runs at startup to load metadata about the disk-based cache into memory.
- The cache manager process → runs periodically and prunes entries from the on-disk cache to keep it within the configured size.
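Both helper processes are driven by the parameters of the `proxy_cache_path` directive; a hypothetical example (path, zone name, and sizes are placeholders):

```nginx
proxy_cache_path /var/cache/nginx
                 keys_zone=app_cache:10m   # shared-memory zone holding cache keys
                 max_size=1g               # cache manager prunes the disk cache beyond this
                 inactive=60m;             # entries unused this long are removed
```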
[Image: terminal screenshot showing the running master and worker processes of an Nginx server]

Nginx process roles
1. Master process
The master process typically runs as the root user and is responsible for the following tasks:
- Reading and validating configurations.
- Creating, binding, and closing sockets.
- Starting, terminating, and maintaining the configured number of worker processes.
- Reconfiguring without service interruption.
- Controlling non-stop binary upgrades.
- Reopening log files.
- Compiling embedded Perl scripts.
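Several of these tasks are commonly triggered from the command line, which sends the corresponding signal to the master process (illustrative commands, assuming a running Nginx installation):

```
nginx -t           # read and validate the configuration
nginx -s reload    # reconfigure without service interruption
nginx -s reopen    # reopen log files, e.g. after rotation
```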
2. Worker process
The worker process is responsible for accepting, handling, and processing client connections. Connection handling involves five main phases: Accept, Read, Process, Write, and Close. Worker processes handle everything except the tasks reserved for the master process and the cache helper processes.
The following steps generally occur during connection handling:
- The worker waits for events on the listening and connection sockets.
- Events occur on the sockets, and the worker handles them.
- An event on a listening socket indicates that a client has initiated a new connection.
- An event on a connection socket indicates that the client has sent a new request, and the worker responds promptly.
The typical HTTP request processing cycle follows these steps:
- The client sends an HTTP request.
- The Nginx core selects the appropriate phase handler based on the configured location matching the request.
- If configured, a load balancer selects an upstream server for proxying.
- The phase handler processes the request and passes each output buffer to the first filter.
- The first filter passes the output to the second filter, and so on.
- The final response is then sent to the client.
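The cycle above maps directly onto configuration: the matched `location` selects the phase handler, `proxy_pass` selects an upstream, and output filters such as gzip transform each response buffer on its way to the client. A hypothetical sketch (addresses and paths are placeholders):

```nginx
server {
    listen 80;

    # The core picks a phase handler by matching the request to a location.
    location /static/ {
        root /var/www;                       # static file phase handler
    }

    location /api/ {
        proxy_pass http://127.0.0.1:8080;    # upstream selected for proxying
    }

    # Filters run on each output buffer before it reaches the client.
    gzip on;
}
```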
The modular architecture of Nginx allows developers to extend the web server's features without modifying its core. Nginx modules come in several flavors: core modules, event modules, phase handlers, protocols, variable handlers, filters, upstreams, and load balancers.
Nginx also provides the robust, scalable configuration system that web servers have always required. Informed by Igor’s experience with Apache, the Nginx configuration was designed to simplify day-to-day operations and to provide an easy means of expansion.
The configuration files are read and validated by the master process, and a compiled, read-only form of the configuration is then shared with the worker processes.
Lessons learned:
When Igor Sysoev began developing Nginx, most of the software that powered the Internet already existed, and its architecture typically followed the constraints of legacy server and network hardware, operating systems, and the old Internet architecture in general. This didn’t deter Igor from believing he could improve things in the web server space. So, while the first lesson might seem obvious, it is this: there is always room for improvement.
Conclusions:
In conclusion, Nginx is a powerful and versatile web server that offers robust features for handling HTTP request processing. Its efficient core capabilities, including reverse proxying, load balancing, and HTTP caching, make it a valuable tool for web administrators seeking high-performance and reliable web server operations. Understanding and effectively utilizing Nginx’s capabilities is crucial for maintaining a responsive and scalable web infrastructure.