
Nginx+Tomcat does load balancing

WBOY (Original) · 2016-07-30 13:31

Nginx Load Balancing

A recent project had to be designed with concurrency in mind, so when planning the architecture I decided to build a Tomcat cluster behind Nginx and then use Redis as a distributed session store. I will share the exploration process step by step below.

Although Nginx is small, it is very powerful: it supports reverse proxying, load balancing, data caching, URL rewriting, read-write splitting, separation of dynamic and static content, and more. This article covers the load-balancing configuration; the next article will test the combination with Redis.

Nginx load balancing scheduling method

Nginx's load-balancing module, upstream, mainly supports the following four scheduling algorithms:

1. Round robin (the default): requests are distributed to the back-end servers one by one in the order they arrive. If a back-end server goes down, it is automatically removed from rotation so that user access is not affected. The weight parameter sets the polling weight: the larger the value, the higher the probability that the server is chosen. It is mainly used when back-end server performance is uneven.
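As a sketch, a weighted round-robin upstream block might look like this (the upstream name and server addresses are placeholders):

```nginx
# Weighted round robin: .110 receives roughly twice as many requests as .111
upstream backend {
    server 192.168.1.110:8080 weight=2;
    server 192.168.1.111:8080 weight=1;
}
```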

2. ip_hash: each request is assigned according to a hash of the client IP, so all requests from the same IP are pinned to the same back-end server. Pinning a client to one server effectively works around the session-sharing problem.

3. fair: this algorithm makes load-balancing decisions based on page size and load time, i.e. it distributes requests according to back-end response time, preferring the servers with the shortest response times. Nginx does not ship with the fair module; to use this algorithm you must download the third-party upstream_fair module for Nginx, compile it in, and configure it.

4. url_hash: this algorithm distributes requests according to a hash of the requested URL, so each URL is always directed to the same back-end server, which can further improve back-end cache efficiency. Nginx does not ship with this module either; to use it you need to install Nginx's hash package and compile it into nginx.
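Assuming the third-party modules above have been compiled in, each algorithm is enabled with a single directive inside the upstream block (a sketch; directive names follow the third-party modules' documentation):

```nginx
# Requires the third-party upstream_fair module
upstream backend_fair {
    fair;
    server 192.168.1.110:8080;
    server 192.168.1.111:8080;
}

# Requires the third-party url_hash module
upstream backend_urlhash {
    server 192.168.1.110:8080;
    server 192.168.1.111:8080;
    hash $request_uri;
}
```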

Status parameters supported by Nginx’s upstream module

In the upstream block of the http section, the server directive specifies each back-end server's IP address and port, and can also set that server's status in the load-balancing rotation. The commonly used status parameters are:

1. down: Indicates that the current server does not participate in load balancing temporarily.

2. backup: a reserved backup server. It receives requests only when all other non-backup machines have failed or are busy, so this server sees the least load.

3. max_fails: the number of failed requests allowed, 1 by default. When the maximum is exceeded, the error defined by the proxy_next_upstream directive is returned.

4. fail_timeout: how long to suspend the server after max_fails failures. max_fails and fail_timeout are used together.
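A hypothetical upstream block combining these status parameters (round-robin scheduling assumed, since weight and backup are incompatible with ip_hash):

```nginx
upstream backend {
    server 192.168.1.110:8080 weight=2 max_fails=2 fail_timeout=30s;
    server 192.168.1.111:8080 max_fails=2 fail_timeout=30s;
    server 192.168.1.112:8080 backup;  # used only when all others fail or are busy
    server 192.168.1.113:8080 down;    # temporarily out of rotation
}
```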

Note: when the load-balancing scheduling algorithm is ip_hash, the weight and backup parameters cannot be used on the back-end servers.

Nginx parameter configuration and description

#user  nobody;
worker_processes  2;

error_log  logs/error.log;
#error_log  logs/error.log  notice;
#error_log  logs/error.log  info;

#pid        logs/nginx.pid;


events {
    worker_connections  1024;
}


http {
    include       mime.types;
    default_type  application/octet-stream;

    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  logs/access.log  main;

    sendfile        on;
    #tcp_nopush     on;

    #keepalive_timeout  0;
    keepalive_timeout  65;

    gzip  on;
    gzip_min_length 1k;
    gzip_buffers 4 16k;
    gzip_http_version 1.0;
    gzip_vary on;


    upstream andy {
        ip_hash;
        server 192.168.1.110:8080 max_fails=2 fail_timeout=30s;
        server 192.168.1.111:8080 max_fails=2 fail_timeout=30s;
    }

    server {
        listen       80;
        server_name  localhost;

        #charset koi8-r;

        #access_log  logs/host.access.log  main;

	location /andy_server {
	    proxy_next_upstream http_502 http_504 error timeout invalid_header;
	    proxy_set_header Host  $host;
	    proxy_set_header X-Real-IP $remote_addr;
	    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
	    proxy_pass http://andy; # the name here must match the upstream name defined above
	    expires      3d;
		
	   # the following settings are optional
	   client_max_body_size        10m;
	   client_body_buffer_size     128k;
	   proxy_connect_timeout       90;
	   proxy_send_timeout          90;
	   proxy_read_timeout          90;
	   proxy_buffer_size           4k;
	   proxy_buffers               4 32k;
	   proxy_busy_buffers_size     64k;
	   proxy_temp_file_write_size 64k;
	}

        error_page  404              /404.html;
		
        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   html;
        }
    }

}

Note: See the previous article for detailed configuration explanation.

Nginx load balancing test

Nginx is deployed on 192.168.1.110, and Tomcat servers are deployed on 192.168.1.110 and 192.168.1.111.
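To observe which Tomcat instance serves each request, and whether the session survives, a minimal test page can be deployed on each Tomcat; the file name and deployment path here are illustrative, not from the original setup:

```jsp
<%-- index.jsp: drop a copy into each Tomcat's webapps/andy_server directory --%>
Session ID: <%= session.getId() %><br/>
Served at: <%= new java.util.Date() %>
```

Reloading the page through Nginx shows whether the session ID changes between requests, which makes the session-loss behavior described below directly visible.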

1. Opening http://192.168.1.110/andy_server/ with the default round-robin scheduling, each request is polled to a different server. This method does not solve the cluster session problem.

2. With ip_hash enabled, requests from the same client IP are fixed to one server, which solves the session problem. If the 192.168.1.110 server goes down, Nginx transfers requests to a server that is still up (tested: after shutting down 192.168.1.110, requests jump to the 192.168.1.111 server). But a problem remains: when the server a client is hashed to goes down and Nginx transfers the request to another server, the session is naturally lost.

3. The remaining two algorithms require installing the corresponding third-party Nginx modules; testing proceeds the same way as above.

Summary

No matter which load-balancing method is used, sessions can be lost. To solve this, sessions must be stored externally, whether in a database, in files, or in a distributed in-memory store; this is essential for cluster setups. The next article will test a solution to the session problem.

Copyright statement: This article is an original article by the blogger and may not be reproduced without the blogger's permission.

