
Detailed explanation of Nginx configuration file (nginx.conf) configuration


The Nginx configuration file nginx.conf is explained in detail, directive by directive, below:

user nginx nginx;

The user and group that Nginx worker processes run as. Not specified on Windows.

worker_processes 8;

Number of worker processes. Adjust according to the hardware; it is usually set to the number of CPU cores, or twice that number.

error_log logs/error.log;

error_log logs/error.log notice;

error_log logs/error.log info;

Error log: storage path and log level (e.g. notice, info).

pid logs/nginx.pid;

pid (process identifier): storage path.

worker_rlimit_nofile 204800;

Specifies the maximum number of descriptors that a process can open.

This directive sets the maximum number of file descriptors an nginx worker process may open. In theory the value should be the system's maximum number of open files (ulimit -n) divided by the number of nginx processes, but because nginx does not distribute requests perfectly evenly, it is best to keep the value consistent with ulimit -n.

On a Linux 2.6 kernel where the number of open files is 65535, worker_rlimit_nofile should accordingly be set to 65535.

This is because nginx does not distribute requests to worker processes perfectly evenly, so if you set the value to 10240 and total concurrency reaches 30,000-40,000, some processes may exceed 10240 connections and return a 502 error.
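As a minimal sketch (assuming the shell limit has already been raised to 65535 with ulimit -n or /etc/security/limits.conf), the two top-level settings discussed above are kept in step like this:

worker_processes 8;

worker_rlimit_nofile 65535; #keep consistent with the value reported by ulimit -n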

events

{

use epoll;

Use the epoll I/O event model. epoll is recommended on Linux, kqueue on FreeBSD; it is not specified on Windows.

Additional explanation:

Like Apache, nginx provides different event models for different operating systems:

A) Standard event model

select and poll belong to the standard event model. If the current system has no more efficient method, nginx will choose select or poll.

B) Efficient event model

kqueue: used on FreeBSD 4.1+, OpenBSD 2.9+, NetBSD 2.0 and Mac OS X. Using kqueue on a dual-processor Mac OS X system may cause a kernel crash.

Epoll: used in Linux kernel version 2.6 and later systems.

/dev/poll: used in Solaris 7 11/99+, HP/UX 11.22+ (eventport), IRIX 6.5.15+ and Tru64 UNIX 5.1A+.

eventport: used on Solaris 10. To prevent kernel crashes, the necessary security patches must be installed.

worker_connections 204800;

The maximum number of connections per worker process. Adjust according to the hardware and use it together with the worker-process setting above; make it as large as possible without driving the CPU to 100%. Theoretically, the maximum number of connections an nginx server can handle is worker_processes * worker_connections.
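To make the arithmetic explicit, here is a minimal sketch that simply restates the two directives already shown above; the theoretical ceiling is their product:

worker_processes 8;

events {

worker_connections 204800; #theoretical maximum: 8 * 204800 = 1,638,400 connections

}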

keepalive_timeout 60;

Keep-alive timeout, in seconds.

client_header_buffer_size 4k;

The buffer size for the client request header. This can be set according to your system's page size. Normally a request header will not exceed 1k, but since the system page size is generally at least 1k, the page size is used here.

The paging size can be obtained with the command getconf PAGESIZE.

[root@web001 ~]# getconf PAGESIZE

4096

There are also cases where a request header exceeds 4k; in any case, the value of client_header_buffer_size must be set to an integral multiple of the system page size.
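For example, with the 4096-byte page size shown above, valid values are 4k, 8k, 12k and so on; a hedged sketch with an illustrative 8k value (not taken from the original configuration):

client_header_buffer_size 8k; #8k = 2 x 4096-byte pages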

open_file_cache max=65535 inactive=60s;

This enables a cache for open files; it is not enabled by default. max specifies the number of cache entries and is recommended to be consistent with the number of open files. inactive specifies how long a file may go unrequested before its cache entry is removed.

open_file_cache_valid 80s;

This specifies how often the cached information is checked for validity.

open_file_cache_min_uses 1;

The minimum number of times a file must be used within the inactive period of the open_file_cache directive for its descriptor to stay open in the cache. As in the example above, if a file is not used even once within the inactive time, it is removed.

}

##Set the http server and use its reverse proxy function to provide load balancing support

http

{

include mime.types;

Set the MIME types; the types are defined in the mime.types file.

default_type application/octet-stream;

log_format main '$remote_addr - $remote_user [$time_local] "$request" '

'$status $body_bytes_sent "$http_referer" '

'"$http_user_agent" "$http_x_forwarded_for"';

log_format log404 '$status [$time_local] $remote_addr $host$request_uri $sent_http_location';

Log format setting.

$remote_addr and $http_x_forwarded_for: used to record the client's IP address;

$remote_user: used to record the client user name;

$time_local: used to record the access time and time zone;

$request: used to record the request's URL and HTTP protocol;

$status: used to record the request status, e.g. 200 for success;

$body_bytes_sent: records the size of the response body sent to the client;

$http_referer: used to record the page the request was linked from;

$http_user_agent: records information about the client's browser;

Usually the web server sits behind a reverse proxy, so the client's real IP address cannot be obtained directly: the address obtained via $remote_addr is the IP of the reverse proxy server. The reverse proxy server can add X-Forwarded-For information to the HTTP headers of the forwarded request to record the original client's IP address and the address the original client requested.
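A minimal sketch of the headers a front-end reverse proxy typically adds for this purpose; the same three directives appear verbatim in the server blocks later in this article:

proxy_set_header Host $host;

proxy_set_header X-Real-IP $remote_addr;

proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;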

access_log logs/host.access.log main;

access_log logs/host.access.404.log log404;

After setting the log format with the log_format directive, use the access_log directive to specify the storage path of the log file;

server_names_hash_bucket_size 128;

#The hash table that stores server names is controlled by the directives server_names_hash_max_size and server_names_hash_bucket_size. The parameter hash bucket size is always equal to the size of the hash table and is a multiple of the processor cache line size. This reduces the number of memory accesses and speeds up hash key lookups in the processor: if hash bucket size equals the processor cache line size, then in the worst case a key lookup requires two memory accesses, the first to determine the address of the storage unit and the second to find the key inside it. Therefore, if nginx reports that hash max size or hash bucket size needs to be increased, first increase the former parameter.
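A hedged sketch of the pair of directives that warning refers to; the max size of 512 is illustrative (it is nginx's default) and not taken from the original configuration:

server_names_hash_max_size 512;

server_names_hash_bucket_size 128;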

client_header_buffer_size 4k;

The buffer size for the client request header. This can be set according to your system's page size. Normally a request header will not exceed 1k, but since the system page size is generally at least 1k, the page size is used here. The page size can be obtained with the command getconf PAGESIZE.

large_client_header_buffers 8 128k;

Buffers for large client request headers. By default nginx reads the request header into the client_header_buffer_size buffer; if a header does not fit, it is read using large_client_header_buffers instead.

open_file_cache max=102400 inactive=20s;

This directive specifies the open-file cache: whether it is enabled, its maximum size, and its inactivity timeout.

Example: open_file_cache max=1000 inactive=20s;
open_file_cache_valid 30s;

open_file_cache_min_uses 2;

open_file_cache_errors on;

open_file_cache_errors

Syntax: open_file_cache_errors on | off. Default: open_file_cache_errors off. Context: http, server, location. This directive specifies whether errors encountered while looking up a file are cached.

open_file_cache_min_uses

Syntax: open_file_cache_min_uses number. Default: open_file_cache_min_uses 1. Context: http, server, location. This directive specifies the minimum number of times a file must be used within the inactive period of the open_file_cache directive; with a larger value, file descriptors are kept open in the cache only for frequently used files.

open_file_cache_valid

Syntax: open_file_cache_valid time. Default: open_file_cache_valid 60. Context: http, server, location. This directive specifies how often the validity of items in the open_file_cache is checked.

client_max_body_size 300m;

Set the maximum size of a file uploaded through nginx.

sendfile on;

The sendfile directive specifies whether nginx calls the sendfile() system call (zero-copy mode) to send out files. For ordinary applications it should be set to on. For disk-I/O-heavy applications such as downloading, it can be set to off to balance disk and network I/O processing speed and reduce system load.

tcp_nopush on;

This option enables or disables the use of the TCP_CORK socket option; it only takes effect when sendfile is used.
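A minimal sketch of how these options are commonly combined; tcp_nodelay appears further down in this same configuration, so only the grouping here is new:

sendfile on;

tcp_nopush on; #only takes effect together with sendfile

tcp_nodelay on;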

proxy_connect_timeout 90;

Timeout for connecting to the back-end server: the timeout for initiating the handshake and waiting for a response.
proxy_read_timeout 180;

After the connection succeeds, the time to wait for the back-end server to respond; in effect the request has already entered the back-end queue waiting to be processed (it can also be described as the time the back-end server takes to process the request).

proxy_send_timeout 180;

The timeout for the back-end server to return data: the back-end server must finish transmitting all data within this time.

proxy_buffer_size 256k;

Sets the buffer size for the first part of the response read from the proxied server. Usually this part of the response contains a small response header. By default the buffer size equals the size of one buffer specified by the proxy_buffers directive, but it can be set smaller.

proxy_buffers 4 256k;

Sets the number and size of the buffers used to read the response from the proxied server. By default the size is one memory page, i.e. 4k or 8k depending on the operating system.

proxy_busy_buffers_size 256k;

proxy_temp_file_write_size 256k;

Sets the size of data written to proxy_temp_path at a time, to prevent one worker process from blocking too long while passing a file.

proxy_temp_path /data0/proxy_temp_dir;

The paths specified by proxy_temp_path and proxy_cache_path must be on the same partition.

proxy_cache_path /data0/proxy_cache_dir levels=1:2 keys_zone=cache_one:200m inactive=1d max_size=30g;

#Set the in-memory cache size to 200MB; content that has not been accessed for 1 day is automatically purged, and the disk cache size is 30GB.
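A hedged sketch, not part of the original configuration, of how the cache_one zone defined above could be referenced inside a location; the /cached/ path and the cache lifetimes are illustrative, and bakend refers to the upstream defined later in this article:

location /cached/ {

proxy_cache cache_one; #the keys_zone declared by proxy_cache_path

proxy_cache_valid 200 304 12h; #illustrative cache lifetimes

proxy_pass http://bakend;

}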
keepalive_timeout 120;

Keep-alive timeout, in seconds.

tcp_nodelay on;

client_body_buffer_size 512k;

If you set this to a fairly large value, such as 256k, then submitting any image smaller than 256k works normally whether you use Firefox or IE. The problem appears if you comment out this directive and use the default client_body_buffer_size, which is twice the operating system page size, i.e. 8k or 16k.
Whether with Firefox 4.0 or IE 8.0, submitting a fairly large image of around 200k returns a 500 Internal Server Error.

proxy_intercept_errors on;

Makes nginx intercept responses from the proxied back end whose HTTP status code is 400 or higher, so that they can be handled with error_page.
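A minimal sketch of the pairing this directive enables; both lines appear in the server blocks later in this article:

proxy_intercept_errors on;

error_page 404 502 = @fetch; #intercepted back-end errors are re-handled by the @fetch named location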

The upstream block defines the group of back-end servers used for load balancing:

upstream bakend {

server 127.0.0.1:8027;

server 127.0.0.1:8028;

server 127.0.0.1:8029;

hash $request_uri;

}

nginx's upstream currently supports the following allocation methods:

1. Round-robin (default)

Each request is assigned to a different back-end server in turn, in chronological order; if a back-end server goes down, it is automatically removed.

2. Weight

Specifies the polling probability; weight is proportional to the access ratio. Used when the performance of the back-end servers is uneven.

For example:

upstream bakend {
server 192.168.0.14 weight=10;
server 192.168.0.15 weight=10;
}
3. ip_hash
Each request is assigned according to the hash of the client IP, so that each visitor always reaches the same back-end server; this can solve session problems.
For example:

upstream bakend {
ip_hash;
server 192.168.0.14:88;
server 192.168.0.15:80;
}
4. fair (third-party)
Requests are assigned according to the response time of the back-end servers; servers with shorter response times are served first.
upstream backend {

server server1;
server server2;
fair;
}
5. url_hash (third-party)
Requests are distributed according to the hash of the requested URL, so that each URL is directed to the same back-end server; this is most effective when the back-end server is a cache.
Example: add a hash statement in the upstream block; other parameters such as weight cannot be written in the server statements. hash_method specifies the hash algorithm used:

upstream backend {

server squid1:3128;

server squid2:3128;

hash $request_uri;
hash_method crc32;
}
tips:
upstream bakend { #Define the IP addresses and status of the load-balanced devices
ip_hash;

server 127.0.0.1:9090 down;

server 127.0.0.1:8080 weight=2;
server 127.0.0.1:6060;
server 127.0.0.1:7070 backup;
}
In the server block that needs load balancing, add:
proxy_pass http://bakend/;
The status of each device can be set as follows:
1. down: the server in question temporarily does not participate in the load.

2. weight: the larger the weight, the larger the share of the load.
3. max_fails: the number of allowed failed requests, default 1. When the maximum number is exceeded, the error defined by the proxy_next_upstream module is returned.
4. fail_timeout: the pause time after max_fails failures.
5. backup: the backup machine is requested only when all other non-backup machines are down or busy, so it carries the least load.
nginx supports defining multiple groups of load balancing at the same time, for different server blocks to use.
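A hedged sketch of two upstream groups used by different server blocks; the pool names, addresses and domains are purely illustrative and not from the original:

upstream app_pool { server 10.0.0.1:8080; server 10.0.0.2:8080; }

upstream img_pool { server 10.0.0.3:80; server 10.0.0.4:80; }

server { listen 80; server_name app.example.com; location / { proxy_pass http://app_pool; } }

server { listen 80; server_name img.example.com; location / { proxy_pass http://img_pool; } }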

client_body_in_file_only: when set to on, the data from a client POST is recorded to a file, which can be used for debugging.
client_body_temp_path: sets the directory for the record files; up to 3 levels of sub-directories can be configured.
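A hedged sketch of the pair of directives just described; the path and the two-level directory layout are illustrative, not from the original:

client_body_in_file_only on;

client_body_temp_path /data0/client_body_temp 1 2; #hypothetical path with 2 levels of sub-directories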

location matches against the URL; it can perform a redirect or start a new proxy load-balancing pass.

##Configure virtual hosts

server

{

listen 80;

Configure listening port

server_name image.***.com;

Configure access domain name

location ~* \.(mp3|exe)$ {

Load-balance requests for addresses ending in "mp3" or "exe"

proxy_pass http://img_relay$request_uri;

Set the port or socket of the proxy server, and URL

proxy_set_header Host $host;

proxy_set_header X-Real-IP $remote_addr;

proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

}

proxy_pass http://img_relay$request_uri;

proxy_set_header Host $host;

proxy_set_header X-Real-IP $remote_addr;

proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

}

location /image {

if ($http_user_agent ~* "xnp") {

rewrite ^(.*)$ http://211.151.188.190:8080/face.jpg redirect;

}

proxy_pass http://img_relay$request_uri;

proxy_set_header Host $host;

proxy_set_header X-Real-IP $remote_addr;

proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

error_page 404 502 = @fetch;

}

location @fetch {

access_log /data/logs/image.log log404;

rewrite ^(.*)$ http://211.151.188.190:8080/face.jpg redirect;

}

}

##Other examples

server

{

listen 80;

server_name *.***.com *.***.cn;

location ~* \.(mp3|exe)$ {

proxy_pass http://img_relay$request_uri;

proxy_set_header Host $host;

proxy_set_header X-Real-IP $remote_addr;

proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

}

proxy_pass http://img_relay$request_uri;

proxy_set_header Host $host;

proxy_set_header X-Real-IP $remote_addr;

proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

#error_page 404 http://i1.***img.com/help/noimg.gif;

error_page 404 502 = @fetch;

}

location @fetch {

access_log /data/logs/baijiaqi.log log404;

rewrite ^(.*)$ http://i1.***img.com/help/noimg.gif redirect;

}

}

server

{

listen 80;

server_name *.***img.com;

location ~* \.(mp3|exe)$ {

proxy_pass http://img_relay$request_uri;

proxy_set_header Host $host;

proxy_set_header X-Real-IP $remote_addr;

proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

}

location / {

if ($http_user_agent ~* "xnp") {

rewrite ^(.*)$ http://i1.***img.com/help/noimg.gif;

}

proxy_pass http://img_relay$request_uri;

proxy_set_header Host $host;

proxy_set_header X-Real-IP $remote_addr;

proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

#error_page 404 http://i1.***img.com/help/noimg.gif;

error_page 404 = @fetch;

}

#access_log off;

location @fetch {

access_log /data/logs/baijiaqi.log log404;

rewrite ^(.*)$ http://i1.***img.com/help/noimg.gif redirect;

}

}

server

{

listen 8080;

server_name ngx-ha.***img.com;

location / {

stub_status on;

access_log off;

}

}

server {

listen 80;

server_name imgsrc1.***.net;

root html;

}

server {

listen 80;

server_name ***.com w.***.com;

# access_log /usr/local/nginx/logs/access_log main;

location / {

rewrite ^(.*)$ http://www.***.com/;

}

}

server {

listen 80;

server_name *******.com w.*******.com;

# access_log /usr/local/nginx/logs/access_log main;

location / {

rewrite ^(.*)$ http://www.*******.com/;

}

}

server {

listen 80;

server_name ******.com;

# access_log /usr/local/nginx/logs/access_log main;

location / {

rewrite ^(.*)$ http://www.******.com/;

}

}

location /NginxStatus {

stub_status on;

access_log off;

auth_basic "NginxStatus";

auth_basic_user_file conf/htpasswd;

}

#Set the address to view Nginx status

location ~ /.ht {

deny all;

}

#Disable access to .htxxx files

}

Note: Variables

The ngx_http_core_module module supports built-in variables whose names are consistent with Apache's built-in variables.

First, there are variables for lines of the client request header, such as $http_user_agent, $http_cookie, and so on.

In addition, there are some other variables

$args This variable is equal to the arguments in the request line
$content_length is equal to the value of the request's "Content-Length" header.

$content_type is equal to the value of the request's "Content-Type" header

$document_root is equal to the value specified by the root directive of the current request

$document_uri is the same as $uri

$host is the value of the "Host" line in the request header, or the name of the server that handled the request if there is no Host line

$limit_rate allows limiting the connection rate

$request_method is equivalent to the request method, usually "GET" or "POST"

$remote_addr client ip

$remote_port client port

$remote_user is equivalent to the username, authenticated by ngx_http_auth_basic_module

$request_filename The path name of the file currently requested, composed from the root or alias directive and the request URI

$request_body_file

$request_uri The complete initial URI containing parameters

$query_string is the same as $args

$scheme the HTTP scheme (http or https), evaluated per request. For example:

rewrite ^(.+)$ $scheme://example.com$1 redirect;

$server_protocol is equivalent to the protocol of the request, usually "HTTP/1.0" or "HTTP/1.1"

$server_addr the IP address of the server the request reached. Generally, obtaining the value of this variable requires a system call; to avoid the system call, specify the IP address in the listen directive and use the bind parameter.

$server_name The name of the server where the request reaches

$server_port The port number of the server where the request reaches

$uri is equivalent to the URI of the current request; it can differ from the initial value, for example after an internal redirect or when using index files
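As a hedged illustration (not from the original) of how these variables can be combined, the sketch below echoes a few of them back for debugging using the standard return directive; requesting /debug-vars then shows the values nginx resolved for the current request:

location /debug-vars {

return 200 "$scheme://$host$uri args=$args method=$request_method addr=$remote_addr\n";

}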
