
Detailed explanation of nginx high concurrency configuration parameters


1. Generally speaking, the following items in the nginx configuration file are the most effective to optimize:

1. worker_processes 8;

The number of nginx worker processes. It is recommended to set this according to the number of CPU cores, usually a multiple of it (for example, two quad-core CPUs count as 8).

2. worker_cpu_affinity 00000001 00000010 00000100 00001000 00010000 00100000 01000000 10000000;

Binds a CPU to each worker process. A single process can also be distributed across multiple CPUs.
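For example, on a four-core machine you could check the core count and bind one worker process to each core (a minimal sketch; the four-bit masks below are illustrative and not taken from the article's 8-core example):

# grep -c processor /proc/cpuinfo
4

worker_processes 4;
worker_cpu_affinity 0001 0010 0100 1000;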

3. worker_rlimit_nofile 65535;

This directive sets the maximum number of file descriptors an nginx process may open. In theory the value should be the system's maximum number of open files (ulimit -n) divided by the number of nginx processes, but nginx does not distribute requests that evenly, so it is best to keep the value consistent with ulimit -n.

On a Linux 2.6 kernel the number of open files here is 65535, so worker_rlimit_nofile should likewise be set to 65535.


This is because nginx does not balance requests across processes perfectly. If you set 10240 and total concurrency reaches 30,000-40,000, some process may exceed 10240 descriptors and a 502 error will be returned.

How to view the Linux system file descriptor limits:

[root@web001 ~]# sysctl -a | grep fs.file

fs.file-max = 789972

fs.file-nr = 510 0 789972
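If fs.file-max itself is too low for the target concurrency, it can be raised as well (an illustrative example; choose a value suited to your memory and workload):

# sysctl -w fs.file-max=789972

Or make it permanent by adding the line fs.file-max = 789972 to /etc/sysctl.conf and applying it with sysctl -p.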

4. use epoll;

Use the epoll I/O event model.

(Additional explanation:

Like Apache, nginx has different event models for different operating systems.

A) Standard event models
select and poll are the standard event models. If the current system has no more efficient mechanism, nginx will use select or poll.

B) Efficient event models
Kqueue: used on FreeBSD 4.1+, OpenBSD 2.9+, NetBSD 2.0 and MacOS X. Using kqueue on a dual-processor MacOS X system may cause a kernel crash.
Epoll: used on Linux kernel 2.6 and later systems.
/dev/poll: used on Solaris 7 11/99+, HP/UX 11.22+ (eventport), IRIX 6.5.15+ and Tru64 UNIX 5.1A+.
Eventport: used on Solaris 10. To prevent kernel crashes, the necessary security patches must be installed.
)

5. worker_connections 65535;

The maximum number of connections allowed per worker process. In theory, the maximum number of connections an nginx server can handle is worker_processes * worker_connections.
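A rough worked example with the values above (a rule-of-thumb estimate; when nginx acts as a reverse proxy each client request also consumes an upstream connection, so the figure is commonly divided by 2 or 4):

max_clients = worker_processes * worker_connections
            = 8 * 65535 = 524280   (serving static content)
            ≈ 8 * 65535 / 4 = 131070   (as a reverse proxy)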

6. keepalive_timeout 60;

Keepalive timeout, in seconds.

7. client_header_buffer_size 4k;

The buffer size for the client request header. This can be set according to your system page size. A request header is usually under 1k, but since the system page size is generally larger than 1k, the page size is used here.

The page size can be obtained with the command getconf PAGESIZE.

[root@web001 ~]# getconf PAGESIZE

4096

There are cases where a request header exceeds 4k, but the client_header_buffer_size value must always be set to a multiple of the system page size.
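One hedged way to handle the occasional oversized header without enlarging the per-request buffer is to pair it with large_client_header_buffers (the values below are illustrative):

client_header_buffer_size 4k;
large_client_header_buffers 4 8k;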

8. open_file_cache max=65535 inactive=60s;

This enables the cache for open files; it is disabled by default. max specifies the number of cache entries and is recommended to match the number of open files. inactive specifies how long a file may go unrequested before its cache entry is removed.

9. open_file_cache_valid 80s;

This specifies how often the validity of cached entries is checked.

10. open_file_cache_min_uses 1;

The minimum number of times a file must be used within the inactive period of the open_file_cache directive. If a file reaches this count, its descriptor remains open in the cache; as in the example above, if a file is not used even once within the inactive time, it will be removed.
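Putting directives 8-10 together, the http block would contain something like the following (a sketch using the values discussed above):

open_file_cache max=65535 inactive=60s;
open_file_cache_valid 80s;
open_file_cache_min_uses 1;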

2. Regarding the optimization of kernel parameters:

net.ipv4.tcp_max_tw_buckets = 6000

The maximum number of TIME-WAIT sockets; the default is 180000.

net.ipv4.ip_local_port_range = 1024 65000

The range of local ports the system may use for connections.

net.ipv4.tcp_tw_recycle = 1

Enable fast recycling of TIME-WAIT sockets.

net.ipv4.tcp_tw_reuse = 1

Enable reuse. Allows TIME-WAIT sockets to be reused for new TCP connections.

net.ipv4.tcp_syncookies = 1

Enable SYN cookies: when the SYN backlog queue overflows, cookies are used to handle the excess connections.

net.core.somaxconn = 262144

The backlog argument of the listen() call in a web application is limited by the kernel parameter net.core.somaxconn, which defaults to 128, while nginx's NGX_LISTEN_BACKLOG defaults to 511, so this value needs to be raised.
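To benefit from a larger queue, the backlog usually has to be raised on both sides (a hedged sketch; 262144 mirrors the value used in this article and the listen port is illustrative):

# /etc/sysctl.conf
net.core.somaxconn = 262144

# nginx server block
listen 8080 backlog=262144;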

net.core.netdev_max_backlog = 262144

The maximum number of packets allowed to queue when a network interface receives packets faster than the kernel can process them.

net.ipv4.tcp_max_orphans = 262144

The maximum number of TCP sockets in the system that are not attached to any user file handle. If this number is exceeded, orphaned connections are reset immediately and a warning is printed. This limit exists only to prevent simple DoS attacks; do not rely on it or artificially lower it, but rather increase it (if you add more memory).

net.ipv4.tcp_max_syn_backlog = 262144

The maximum number of recorded connection requests that have not yet been acknowledged by the client. The default is 1024 for systems with 128 MB of memory and 128 for systems with less.

net.ipv4.tcp_timestamps = 0

Timestamps help avoid sequence-number wraparound. A 1 Gbps link is bound to reuse old sequence numbers; the timestamp lets the kernel accept such "abnormal" packets. It is turned off here.

net.ipv4.tcp_synack_retries = 1

To open a connection to the peer, the kernel must send a SYN with an ACK acknowledging the earlier SYN, i.e. the second step of the three-way handshake. This setting determines how many SYN+ACK packets the kernel sends before giving up on the connection.

net.ipv4.tcp_syn_retries = 1

The number of SYN packets sent before the kernel gives up establishing the connection.

net.ipv4.tcp_fin_timeout = 1

If the socket is closed by the local end, this parameter determines how long it stays in the FIN-WAIT-2 state. The peer may misbehave and never close the connection, or even crash unexpectedly. The default value is 60 seconds; 2.2-series kernels usually used 180 seconds. You can keep this setting, but remember that even on a lightly loaded web server there is a risk of running out of memory because of a large number of dead sockets. FIN-WAIT-2 is less dangerous than FIN-WAIT-1, because each such socket can eat at most about 1.5 KB of memory, but they live longer.

net.ipv4.tcp_keepalive_time = 30

How often TCP sends keepalive probes when keepalive is enabled. The default is 2 hours.

3. A complete set of kernel optimization settings is given below:

vi /etc/sysctl.conf. On CentOS 5.5, all existing content can be cleared and replaced directly with the following:

net.ipv4.ip_forward = 0
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.default.accept_source_route = 0
kernel.sysrq = 0
kernel.core_uses_pid = 1
net.ipv4.tcp_syncookies = 1
kernel.msgmnb = 65536
kernel.msgmax = 65536
kernel.shmmax = 68719476736
kernel.shmall = 4294967296
net.ipv4.tcp_max_tw_buckets = 6000
net.ipv4.tcp_sack = 1
net.ipv4.tcp_window_scaling = 1
net.ipv4.tcp_rmem = 4096 87380 4194304
net.ipv4.tcp_wmem = 4096 16384 4194304
net.core.wmem_default = 8388608
net.core.rmem_default = 8388608
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.core.netdev_max_backlog = 262144
net.core.somaxconn = 262144
net.ipv4.tcp_max_orphans = 3276800
net.ipv4.tcp_max_syn_backlog = 262144
net.ipv4.tcp_timestamps = 0
net.ipv4.tcp_synack_retries = 1
net.ipv4.tcp_syn_retries = 1
net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_mem = 94500000 915000000 927000000
net.ipv4.tcp_fin_timeout = 1
net.ipv4.tcp_keepalive_time = 30
net.ipv4.ip_local_port_range = 1024 65000

To make the configuration take effect immediately, use the following command:
/sbin/sysctl -p
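You can then spot-check that the values were applied (an illustrative check using one of the parameters above):

# sysctl net.core.somaxconn
net.core.somaxconn = 262144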

4. The following concerns optimizing the number of system connections

The default values for Linux open files and max user processes are both 1024:

# ulimit -n

1024

# ulimit -u

1024

Problem description: the server can only open 1024 files at the same time and handle 1024 user processes.

Use ulimit -a to view all of the current system's limits, and ulimit -n to view the current maximum number of open files.

A freshly installed Linux system defaults to only 1024. On a heavily loaded server it is easy to hit "error: too many open files", so the limit needs to be raised.

Solution:

Use ulimit -n 65535 to change it immediately, but the change is lost after a reboot. (Note that ulimit -SHn 65535 is equivalent to ulimit -n 65535; -S means soft and -H means hard.)

There are three ways to make the change permanent:

1. Add the line ulimit -SHn 65535 to /etc/rc.local
2. Add the line ulimit -SHn 65535 to /etc/profile
3. Add the following at the end of /etc/security/limits.conf:

* soft nofile 65535
* hard nofile 65535
* soft nproc 65535
* hard nproc 65535

Which one should you use? On CentOS the first method has no effect, so use the third, which does work; on Debian the second method works.

# ulimit -n

65535

# ulimit -u

65535

Note: the ulimit command itself has soft and hard settings: add -H for hard and -S for soft; by default the soft limit is shown.

The soft limit is the value currently in effect on the system. An ordinary user can lower the hard limit but cannot raise it, and the soft limit cannot be set higher than the hard limit. Only root can raise the hard limit.
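To inspect the two limits separately (illustrative output after the settings above have taken effect):

# ulimit -Sn
65535
# ulimit -Hn
65535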

5. The following is a simple nginx configuration file:

user www www;
worker_processes 8;
worker_cpu_affinity 00000001 00000010 00000100 00001000 00010000 00100000 01000000 10000000;
error_log /www/log/nginx_error.log crit;
pid /usr/local/nginx/nginx.pid;
worker_rlimit_nofile 204800;
events
{
use epoll;
worker_connections 204800;
}
http
{
include mime.types;
default_type application/octet-stream;
charset utf-8;
server_names_hash_bucket_size 128;
client_header_buffer_size 2k;
large_client_header_buffers 4 4k;
client_max_body_size 8m;
sendfile on;
tcp_nopush on;
keepalive_timeout 60;
fastcgi_cache_path /usr/local/nginx/fastcgi_cache levels=1:2
keys_zone=TEST:10m
inactive=5m;
fastcgi_connect_timeout 300;
fastcgi_send_timeout 300;
fastcgi_read_timeout 300;
fastcgi_buffer_size 4k;
fastcgi_buffers 8 4k;
fastcgi_busy_buffers_size 8k;
fastcgi_temp_file_write_size 8k;
fastcgi_cache TEST;
fastcgi_cache_valid 200 302 1h;
fastcgi_cache_valid 301 1d;
fastcgi_cache_valid any 1m;
fastcgi_cache_min_uses 1;
fastcgi_cache_use_stale error timeout invalid_header http_500;
open_file_cache max=204800 inactive=20s;
open_file_cache_min_uses 1;
open_file_cache_valid 30s;
tcp_nodelay on;
gzip on;
gzip_min_length 1k;
gzip_buffers 4 16k;
gzip_http_version 1.0;
gzip_comp_level 2;
gzip_types text/plain application/x-javascript text/css application/xml;
gzip_vary on;
server
{
listen 8080;
server_name backup.aiju.com;
index index.php index.htm;
root /www/html/;
location /status
{
stub_status on;
}
location ~ .*\.(php|php5)?$
{
fastcgi_pass 127.0.0.1:9000;
fastcgi_index index.php;
include fcgi.conf;
}
location ~ .*\.(gif|jpg|jpeg|png|bmp|swf|js|css)$
{
expires 30d;
}
log_format access '$remote_addr -- $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" $http_x_forwarded_for';
access_log /www/log/access.log access;
}
}
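After editing, the configuration can be syntax-checked and reloaded without dropping connections (assuming nginx is installed under /usr/local/nginx, as the pid path above suggests):

# /usr/local/nginx/sbin/nginx -t
# /usr/local/nginx/sbin/nginx -s reload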

6. Several FastCGI-related directives:

fastcgi_cache_path /usr/local/nginx/fastcgi_cache levels=1:2 keys_zone=TEST:10m inactive=5m;

This directive specifies a path for the FastCGI cache, the directory hierarchy levels, the key zone storage time and the inactive removal time.

fastcgi_connect_timeout 300;

Specifies the timeout for connecting to the backend FastCGI server.

fastcgi_send_timeout 300;

The timeout for sending a request to FastCGI; this is the timeout for transmitting the request after the handshake has already been completed.

fastcgi_read_timeout 300;

The timeout for receiving a FastCGI response; this is the timeout for receiving the response after the handshake has already been completed.

fastcgi_buffer_size 4k;

Specifies the buffer size used to read the first part of the FastCGI response. The first part of a response usually does not exceed 1k, but since the page size is 4k, it is set to 4k here.

fastcgi_buffers 8 4k;

Specifies how many buffers, and of what size, are used locally to buffer FastCGI responses.

fastcgi_busy_buffers_size 8k;

I am not sure what this directive does; I only know the default value is twice fastcgi_buffers.

fastcgi_temp_file_write_size 8k;

The size of the data blocks used when writing to fastcgi_temp_path; the default is twice fastcgi_buffers.

fastcgi_cache TEST;

Enables the FastCGI cache and assigns it a name. Personally, I find enabling the cache very useful: it effectively reduces CPU load and prevents 502 errors.

fastcgi_cache_valid 200 302 1h;
fastcgi_cache_valid 301 1d;
fastcgi_cache_valid any 1m;

Specifies cache times for particular response codes. In the example above, 200 and 302 responses are cached for one hour, 301 responses for one day, and everything else for one minute.

fastcgi_cache_min_uses 1;

The minimum number of times an entry must be used within the inactive period of the fastcgi_cache_path directive. As in the example above, if a file is not used even once within 5 minutes, it will be removed.

fastcgi_cache_use_stale error timeout invalid_headerhttp_500;

I am not certain what this parameter does; presumably it tells nginx which kinds of cached responses should not be used. The above are the FastCGI-related parameters in nginx. In addition, FastCGI itself has some settings worth optimizing; if you use php-fpm to manage FastCGI, you can adjust the following values in its configuration file:

60

The number of concurrent requests handled at the same time, i.e. it will open at most 60 child threads to handle concurrent connections.

102400

The maximum number of open files.

204800

The maximum number of requests each process can execute before being reset.
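For reference, in INI-style php-fpm pool configuration these three values appear to correspond to the following directives (a hedged sketch; the directive names are php-fpm's, the values are the ones above):

pm.max_children = 60
rlimit_files = 102400
pm.max_requests = 204800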
