What are the implementation methods of nginx load balancing?
In a server cluster, Nginx plays the role of a proxy server (i.e., a reverse proxy): to avoid putting excessive pressure on any single server, it forwards user requests to different servers. Load balancing is the strategy for selecting a server from the backend server list defined in the "upstream" block to handle each user request.
Several common methods of load balancing
1. Polling (default)
Each request is assigned to a different backend server, one by one, in chronological order. If a backend server goes down, it is automatically removed from rotation.
upstream backserver {
    server 192.168.0.14;
    server 192.168.0.15;
}
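The selection logic can be sketched in Python (an illustrative model only, not nginx's actual C implementation; the server list mirrors the upstream block above):

```python
from itertools import cycle

# Backend list mirroring the upstream block above.
SERVERS = ["192.168.0.14", "192.168.0.15"]

# cycle() yields the servers one by one in order, wrapping around,
# which is what round-robin does for successive requests.
rr = cycle(SERVERS)
picks = [next(rr) for _ in range(4)]
# picks alternates: .14, .15, .14, .15
```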
2. weight
Specify the polling probability, weight is proportional to the access ratio, and is used when the back-end server performance is uneven.
upstream backserver {
    server 192.168.0.14 weight=3;
    server 192.168.0.15 weight=7;
}
The higher the weight, the greater the probability of being selected; in the example above, the two servers receive 30% and 70% of requests respectively.
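Modern nginx implements this as "smooth" weighted round-robin, which interleaves picks rather than sending bursts to one server. A Python sketch of that algorithm, using the weights from the example above (an illustrative model, not nginx's source):

```python
def smooth_wrr(servers, n):
    """servers: list of (name, weight) pairs.
    Returns n picks using smooth weighted round-robin: each round,
    every server's current score grows by its weight; the highest
    score wins and is penalized by the total weight."""
    current = {name: 0 for name, _ in servers}
    total = sum(w for _, w in servers)
    picks = []
    for _ in range(n):
        for name, w in servers:
            current[name] += w
        best = max(servers, key=lambda s: current[s[0]])[0]
        current[best] -= total
        picks.append(best)
    return picks

picks = smooth_wrr([("192.168.0.14", 3), ("192.168.0.15", 7)], 10)
# Over 10 requests the 3:7 ratio is honored exactly.
```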
3. ip_hash
One problem with the above methods: in a load-balanced system, if a user logs in on a particular server, their next request may be directed to a different server in the cluster, because each request is distributed independently. A logged-in user who is redirected to another server loses their login (session) information. This is obviously unacceptable.
We can use the ip_hash directive to solve this problem. If a client has already visited a particular server, subsequent requests from that client are routed to the same server by the hash algorithm.
Each request is allocated according to the hash of the client's IP address, so each visitor consistently reaches the same backend server, which solves the session problem.
upstream backserver {
    ip_hash;
    server 192.168.0.14:88;
    server 192.168.0.15:80;
}
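The idea can be sketched in Python. Note that nginx's ip_hash hashes only the first three octets of an IPv4 address (so a whole /24 maps to the same backend) and uses its own hash function; the md5 here is just a stand-in for illustration:

```python
import hashlib

SERVERS = ["192.168.0.14:88", "192.168.0.15:80"]

def pick_by_ip(client_ip, servers):
    """Map a client IP to a fixed backend. Mimics ip_hash by hashing
    only the first three octets; md5 substitutes for nginx's hash."""
    prefix = ".".join(client_ip.split(".")[:3])
    digest = hashlib.md5(prefix.encode()).hexdigest()
    return servers[int(digest, 16) % len(servers)]

# The same client (indeed, the same /24) always lands on one backend:
a = pick_by_ip("10.0.0.1", SERVERS)
b = pick_by_ip("10.0.0.99", SERVERS)
```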
4. fair (third party)
Requests are allocated according to the response time of the backend server, and those with short response times are allocated first.
upstream backserver {
    server server1;
    server server2;
    fair;
}
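The selection rule is simply "pick the backend with the shortest observed response time." A minimal Python sketch (the timing values are hypothetical; the real fair module maintains these averages from live traffic):

```python
def pick_fastest(avg_response_ms):
    """Choose the backend with the lowest average response time,
    roughly what the third-party 'fair' module does on each request."""
    return min(avg_response_ms, key=avg_response_ms.get)

# Hypothetical measured averages, in milliseconds:
times = {"server1": 120.0, "server2": 45.0}
fastest = pick_fastest(times)
# server2 responds faster, so it is chosen.
```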
5. url_hash (third party)
Requests are distributed according to the hash of the requested URL, so each URL is always directed to the same backend server. This is most effective when the backend servers are caches.
upstream backserver {
    server squid1:3128;
    server squid2:3128;
    hash $request_uri;
    hash_method crc32;
}
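A Python sketch of the same mapping, using crc32 as in the `hash_method crc32` line above (an illustrative model; the modulo step is a simplification of the module's bucket selection):

```python
import zlib

SERVERS = ["squid1:3128", "squid2:3128"]

def pick_by_uri(request_uri, servers):
    # crc32 of the URI, then modulo the server count:
    # the same URL always hashes to the same cache server.
    return servers[zlib.crc32(request_uri.encode()) % len(servers)]

first = pick_by_uri("/img/logo.png", SERVERS)
second = pick_by_uri("/img/logo.png", SERVERS)
```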
The status of each server can be set with the following parameters:
1) down: the current server temporarily does not participate in load balancing.
2) weight: defaults to 1; the larger the weight, the larger the share of load the server receives.
3) max_fails: the number of allowed failed requests, 1 by default. When the maximum is exceeded, the error defined by the proxy_next_upstream directive is returned.
4) fail_timeout: the time the server is paused after max_fails failures.
5) backup: the server receives requests only when all non-backup machines are down or busy, so it carries the least load.
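The max_fails / fail_timeout bookkeeping can be sketched in Python (a simplified model with an explicit clock parameter for clarity; the class and method names are hypothetical, not nginx internals):

```python
class Backend:
    """Minimal sketch of max_fails / fail_timeout accounting: after
    max_fails failures, the server is skipped for fail_timeout seconds."""

    def __init__(self, addr, max_fails=1, fail_timeout=10.0):
        self.addr = addr
        self.max_fails = max_fails
        self.fail_timeout = fail_timeout
        self.fails = 0
        self.down_until = 0.0

    def available(self, now):
        # Paused servers are skipped until fail_timeout elapses.
        return not (self.down_until and now < self.down_until)

    def record_failure(self, now):
        self.fails += 1
        if self.fails >= self.max_fails:
            self.down_until = now + self.fail_timeout
            self.fails = 0  # fresh window after the pause

b = Backend("192.168.0.14", max_fails=2, fail_timeout=5.0)
b.record_failure(now=0.0)   # one failure: still in rotation
b.record_failure(now=1.0)   # hit max_fails: paused until t=6.0
```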
Configuration example:
#user nobody;
worker_processes 4;
events {
    # maximum number of concurrent connections
    worker_connections 1024;
}
http {
    # list of candidate servers
    upstream myproject {
        # the ip_hash directive routes the same user to the same server
        ip_hash;
        server 125.219.42.4 fail_timeout=60s;
        server 172.31.2.183;
    }
    server {
        # listening port
        listen 80;
        # at the root path
        location / {
            # which upstream server list to use
            proxy_pass http://myproject;
        }
    }
}
The above is the detailed content of "What are the implementation methods of nginx load balancing?". For more information, see the related articles on the PHP Chinese website.