
How nginx achieves high concurrency

(*-*)浩 (Original)
2019-06-04 17:34:39 · 6791 views

In short: asynchronous, non-blocking I/O built on epoll, plus extensive low-level code optimization.

In a little more detail, it comes down to nginx's process model and event model.



Process model

nginx adopts a master process and multiple worker processes.

The master process does not handle requests itself: it reads the configuration, binds the listening sockets, and forks the worker processes, which then accept and handle connections on their own.

The master process also monitors the workers' status, restarting any that die, to ensure high reliability.

The number of worker processes is generally set to match the number of CPU cores. nginx's workers differ from Apache's: a (prefork) Apache process can handle only one request at a time, so Apache ends up spawning many processes, hundreds or even thousands. The number of requests an nginx worker can handle concurrently is limited only by memory, so a single worker can serve many requests at once.
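The master-plus-N-workers layout can be sketched in Python (a simplified stand-in for nginx's actual C implementation; `worker_count` and `worker_main` are illustrative names, not nginx APIs):

```python
import multiprocessing
import os

def worker_count(configured=None):
    """Pick the worker count: honor an explicit setting, otherwise match
    the CPU core count (the equivalent of `worker_processes auto`)."""
    return configured if configured else (os.cpu_count() or 1)

def worker_main(worker_id):
    # In nginx, each worker runs its own event loop over thousands of
    # connections; here the worker just reports in.
    return f"worker {worker_id} ready"

if __name__ == "__main__":
    n = worker_count()
    # The master forks and supervises the workers; multiprocessing stands
    # in for fork() in this sketch.
    with multiprocessing.Pool(n) as pool:
        print(pool.map(worker_main, range(n)))
```

In real nginx this is configured with the `worker_processes` directive (commonly `worker_processes auto;`).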

Event model

nginx is asynchronous and non-blocking.

Each incoming request is handled by a worker process, but not for its entire lifetime in one go. Whenever the request reaches a point that could block, for example forwarding it to the upstream (backend) server and waiting for the reply, the worker does not sit and wait. It sends the request, registers an event ("when the upstream replies, notify me and I will continue"), and moves on. If another request arrives in the meantime, the worker handles it the same way. When the upstream server replies, the registered event fires, the worker picks the request back up, and processing continues.
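That register-and-move-on flow can be sketched with Python's `selectors` module (a toy stand-in for nginx's C event loop; `serve_ready` and `on_upstream_reply` are illustrative names):

```python
import selectors
import socket

def serve_ready(sel):
    """One iteration of the event loop: wait for any registered socket to
    become readable, then run the callback registered for it."""
    results = []
    for key, _ in sel.select(timeout=1):
        callback = key.data
        results.append(callback(key.fileobj))
    return results

def on_upstream_reply(sock):
    # "If the upstream returns, tell me and I will continue."
    return sock.recv(1024).decode()

if __name__ == "__main__":
    # Simulate the upstream with a connected socket pair.
    upstream, worker_side = socket.socketpair()
    worker_side.setblocking(False)
    sel = selectors.DefaultSelector()
    # Register interest instead of blocking on the reply.
    sel.register(worker_side, selectors.EVENT_READ, on_upstream_reply)
    upstream.sendall(b"backend response")
    print(serve_ready(sel))
```

Between registering the socket and calling `serve_ready`, the worker is free to service other connections; that is the whole point of the event model.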

The nature of a web server's work means that most of each request's lifetime is spent in network transmission; little time is actually spent on the server machine. This is the secret to handling high concurrency with only a handful of processes.

I/O multiplexing model: epoll

epoll(): the kernel maintains a ready list, so epoll_wait only has to check whether that list is non-empty to learn whether any file descriptor is ready. Internally, epoll registers a callback with the device driver for each monitored sockfd; when an event occurs on a sockfd, its callback adds it to the ready list, and idle sockfds are never touched.

select(): the kernel polls, scanning every fd on each call to check whether any is ready. The sockfds are passed in an fd_set, an array-like bitmap keyed by fd whose values are 0 or 1.

poll(): like select(), the kernel still scans the full set on every call, but the fds are passed as an array of pollfd structures, so there is no FD_SETSIZE limit on how many descriptors can be watched.

[Summary]: epoll's biggest advantage over select is that its efficiency does not degrade as the number of sockfds grows.
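The contrast shows up directly in Python's `select` module, which wraps both system calls; this sketch (with the illustrative helper `ready_fds`) demonstrates the select() scan, and the epoll registration style where available (epoll is Linux-only):

```python
import select
import socket

def ready_fds(rlist, timeout=0):
    """select()-style check: the kernel scans every fd we pass in, so the
    cost grows with the number of fds even when few are active."""
    readable, _, _ = select.select(rlist, [], [], timeout)
    return readable

if __name__ == "__main__":
    a, b = socket.socketpair()
    # Nothing sent yet: b is not readable.
    assert ready_fds([b]) == []
    a.sendall(b"ping")
    # Now the kernel reports b as ready.
    assert ready_fds([b], timeout=1) == [b]
    # epoll (Linux only): register once, then only ready fds are returned,
    # with no per-call scan over the whole set.
    if hasattr(select, "epoll"):
        ep = select.epoll()
        ep.register(b.fileno(), select.EPOLLIN)
        events = ep.poll(timeout=1)
        assert [fd for fd, _ in events] == [b.fileno()]
```

With select(), every call re-submits and re-scans the whole fd set; with epoll, registration happens once and the kernel's callbacks maintain the ready list, which is why epoll stays flat as connection counts grow.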


