How to configure load balancing in nginx
Load balancing is a must for high-traffic websites. This article walks through how to configure load balancing on an Nginx server; I hope it helps anyone who needs it.
Load Balancing
First, a brief look at what load balancing is. Taken literally, it means that N servers share the load evenly, avoiding the situation where one server goes down under heavy load while another sits idle. The prerequisite for load balancing is therefore multiple servers: two or more are enough.
Test environment
Since no real servers are available, this test resolves the domain name locally on the host machine and runs three CentOS virtual machines in VMware.
Test domain name: a.com
A server IP: 192.168.5.149 (main)
B server IP: 192.168.5.27
C server IP: 192.168.5.126
Deployment idea
Server A acts as the main server: the domain name resolves directly to server A (192.168.5.149), and server A load-balances requests to server B (192.168.5.27) and server C (192.168.5.126).
Domain name resolution
Since this is not a real environment and a.com is only a test domain, its resolution has to be configured in the hosts file.
Open: C:\Windows\System32\drivers\etc\hosts
Add the following line at the end:
192.168.5.149 a.com
Save and exit, then open a command prompt and ping a.com to check whether the mapping works.
The ping output (screenshot omitted) shows that a.com now resolves to 192.168.5.149.
A server nginx.conf settings
Open nginx.conf; the file is located in the conf directory under the nginx installation directory.
Add the following code to the http section
upstream a.com {
    server 192.168.5.126:80;
    server 192.168.5.27:80;
}
server {
    listen 80;
    server_name a.com;
    location / {
        proxy_pass http://a.com;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
Save and restart nginx
B and C server nginx.conf settings
Open nginx.conf and add the following code to the http section
server {
    listen 80;
    server_name a.com;
    index index.html;
    root /data0/htdocs/www;
}
Save and restart nginx
Test
To tell which server handles each request to a.com, I place an index.html file with different content on servers B and C.
Open a browser and visit a.com. Refreshing shows that requests are distributed by the main server (192.168.5.149) between server B (192.168.5.27) and server C (192.168.5.126), achieving the load-balancing effect.
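The distribution seen on refresh is nginx's default round-robin scheduling. As a toy illustration (not nginx's actual dispatch code; the IPs are the B and C servers above), requests simply cycle through the upstream list in order:

```shell
#!/usr/bin/env bash
# Toy sketch of round-robin: each request goes to the next backend in the list.
backends=("192.168.5.27" "192.168.5.126")
for i in 1 2 3 4; do
  target=${backends[$(( (i - 1) % ${#backends[@]} ))]}
  echo "request $i -> $target"
done
```

With two backends, odd-numbered requests land on server B and even-numbered ones on server C, which matches the alternation seen when refreshing the browser.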
What if one of the servers goes down?
When a certain server goes down, will access be affected?
Let's look at an example. Building on the setup above, suppose server C (192.168.5.126) goes down (since real downtime cannot be simulated here, I simply shut down server C), then visit the site again.
Access results:
We find that although server C (192.168.5.126) is down, website access is not affected. With load balancing, you don't have to worry about a single failed machine dragging down the whole site.
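This works because nginx performs passive health checks: when proxied requests to a backend fail, that server is temporarily marked unavailable. The thresholds can be tuned per upstream server with the max_fails and fail_timeout parameters (real nginx directives, though not used in this article; the values below are illustrative):

```nginx
upstream a.com {
    # Mark a backend unavailable for 30s after 3 failed attempts
    # (nginx defaults are max_fails=1, fail_timeout=10s).
    server 192.168.5.126:80 max_fails=3 fail_timeout=30s;
    server 192.168.5.27:80  max_fails=3 fail_timeout=30s;
}
```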
What if b.com also needs to set up load balancing?
It's very simple: the configuration is the same as for a.com. As follows:
Assume the main server IP for b.com is also 192.168.5.149, and the load is balanced across the 192.168.5.150 and 192.168.5.151 machines.
Resolve the domain name b.com to the IP 192.168.5.149.
Add the following code to nginx.conf of the main server (192.168.5.149):
upstream b.com {
    server 192.168.5.150:80;
    server 192.168.5.151:80;
}
server {
    listen 80;
    server_name b.com;
    location / {
        proxy_pass http://b.com;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
Save and restart nginx
On the 192.168.5.150 and 192.168.5.151 machines, set up nginx: open nginx.conf and add the following code at the end:
server {
    listen 80;
    server_name b.com;
    index index.html;
    root /data0/htdocs/www;
}
Save and restart nginx
After completing these steps, the load-balancing configuration for b.com is in place.
Can the main server provide services too?
In the examples above, the main server only forwards traffic to the other servers. Can the main server itself be added to the server list, so that a machine is not wasted purely on forwarding and also takes part in serving requests?
Take the three-server case above:
A server IP: 192.168.5.149 (main)
B server IP: 192.168.5.27
C server IP: 192.168.5.126
We resolve the domain name to server A, which forwards requests to servers B and C, so server A only performs forwarding. Now let's have server A provide the site service as well.
Let's analyze it first. If the main server is added to the upstream, two situations may occur:
1. The main server forwards the request to another IP, and that server processes it normally;
2. The main server forwards the request to its own IP, which re-enters the main server's dispatching; if requests keep being assigned to the local machine, an infinite loop results.
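The loop risk in case 2 can be sketched with a toy shell simulation (the pool and listen address mirror this article's setup; this is an illustration, not nginx's actual dispatch logic):

```shell
#!/usr/bin/env bash
# Toy illustration: if the balancer's own listen address appears in its
# upstream pool, a request routed to that entry re-enters the balancer.
pool=("192.168.5.126:80" "192.168.5.27:80" "127.0.0.1:80")
listen_addr="127.0.0.1:80"   # the balancer itself listens here
loops=0
for target in "${pool[@]}"; do
  if [ "$target" = "$listen_addr" ]; then
    echo "$target -> loops back into the balancer"
    loops=1
  else
    echo "$target -> handled by a backend"
  fi
done
```

The fix described next is to make the main server serve the site on a different port, so its upstream entry never collides with the balancer's own listener.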
How do we solve this? Port 80 is already used by the load-balancing listener, so it can no longer handle a.com requests directly on this server; a new port is needed. So we add the following code to the main server's nginx.conf:
server {
    listen 8080;
    server_name a.com;
    index index.html;
    root /data0/htdocs/www;
}
Restart nginx, then enter a.com:8080 in the browser to see whether it is accessible. It turns out it can be accessed normally.
Since it works, we can add the main server to the upstream, using the new port, as in the following code:
upstream a.com {
    server 192.168.5.126:80;
    server 192.168.5.27:80;
    server 127.0.0.1:8080;
}
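If the machines have different capacity, nginx's weight parameter (a real upstream directive, though not used in this article) can skew the round-robin so that stronger machines receive proportionally more requests; a sketch with illustrative values:

```nginx
upstream a.com {
    server 192.168.5.126:80 weight=2;  # receives roughly twice as many requests
    server 192.168.5.27:80  weight=1;
    server 127.0.0.1:8080   weight=1;  # the main server itself, on the alternate port
}
```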
Here either the main server's IP 192.168.5.149 or 127.0.0.1 can be used; both refer to the machine itself.
Restart Nginx, then visit a.com again to see whether requests are also assigned to the main server.
The main server now also participates in serving requests.
Finally
1. Load balancing is not unique to nginx; the famous Apache has it too, but its performance may not match nginx's.
2. Multiple servers provide the service, but the domain name resolves only to the main server, so the real backend IPs cannot be obtained simply by pinging the domain, which adds some security.
3. The IPs in the upstream need not be on an internal network; public IPs work too. The classic setup, however, exposes one LAN machine's IP to the public network, resolves the domain directly to that IP, and has that main server forward to internal-network server IPs.
4. If one server goes down, the website keeps running normally; Nginx will not forward requests to an IP that is down.
The above is the detailed content of How to configure load balancing in nginx. For more information, please follow other related articles on the PHP Chinese website!