What causes a 'Too many open files' error in Nginx?
When Nginx reports a "Too many open files" error, it usually means the system or the Nginx process has reached its file descriptor limit. Typical fixes: 1. Raise the Linux soft and hard limits for the nginx (or run-as) user in /etc/security/limits.conf; 2. Adjust Nginx's worker_connections to match expected traffic and reload the configuration; 3. Raise the system-wide file descriptor cap fs.file-max by editing /etc/sysctl.conf and applying the change; 4. Reduce unnecessary file handle usage, for example with open_log_file_cache, consolidated logs, and fewer redundant proxy connections. After adjusting, monitor the actual number of open files with lsof.
When Nginx throws a “Too many open files” error, it usually means the system or process has hit its file descriptor limit. This can lead to failed connections, stalled services, or even crashes if not addressed.
Here's what typically causes this issue and how to handle it.
1. File Descriptor Limits in Linux
Linux systems impose limits on the number of file descriptors (FDs) that a process can open. These limits come in two flavors: soft and hard.
- Soft limit – What the process is currently allowed to use.
- Hard limit – The maximum value the soft limit can be raised to.
If Nginx reaches the soft limit, you'll see the “Too many open files” message in the logs. You can check current limits using:
ulimit -n
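ulimit -n on its own prints the soft limit for the current shell; to see both of the values described above, you can ask for them explicitly:
ulimit -Sn   # soft limit (what is currently enforced)
ulimit -Hn   # hard limit (the ceiling the soft limit can be raised to)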
To increase the limit, edit /etc/security/limits.conf and add:
nginx soft nofile 65536
nginx hard nofile 65536
Or for the user running Nginx:
www-data soft nofile 65536
www-data hard nofile 65536
Also make sure pam_limits.so is enabled in your PAM config so these settings are applied at login.
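To verify that the new values actually reached a running Nginx, you can read them straight from /proc. A minimal check, assuming Nginx is already running (reading another user's /proc entries may require sudo):
# Print the "Max open files" limit for every running nginx process.
for pid in $(pidof nginx); do
  echo "PID $pid:"
  grep 'Max open files' "/proc/$pid/limits"
done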
2. Nginx Worker Connections Setting
In nginx.conf, there's a directive called worker_connections. It defines how many simultaneous connections each worker process can handle.
This line might look like:
events {
    worker_connections 1024;
}
Each connection uses at least one file descriptor — sometimes more if SSL or upstream connections are involved.
So if you're handling thousands of concurrent users, the default 1024 may be too low.
You should:
- Estimate your expected traffic.
- Multiply by the average FDs per connection (often 2–4).
- Set worker_connections higher than that.
Don't forget to reload Nginx after changing this:
nginx -s reload
Also keep an eye on the number of worker processes multiplied by worker_connections, because that gives you the total maximum connections across all workers.
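As a rough sanity check, here is a small shell sketch of that multiplication, assuming nginx is on the PATH and nproc is available. nginx -T dumps the merged configuration (it may need sudo to read the config files), and worker_processes auto is approximated with the CPU count:
# Estimate total capacity: worker_processes * worker_connections.
conf=$(nginx -T 2>/dev/null)
workers=$(printf '%s\n' "$conf" | grep -E '^[[:space:]]*worker_processes' | grep -oE '[0-9]+|auto' | head -n 1)
[ "$workers" = "auto" ] && workers=$(nproc)   # "auto" means one worker per CPU core
conns=$(printf '%s\n' "$conf" | grep -E '^[[:space:]]*worker_connections' | grep -oE '[0-9]+' | head -n 1)
echo "Approximate max concurrent connections: $((workers * conns))"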
3. System-wide File Descriptor Cap
Even if you configure Nginx and user limits correctly, the entire system also has a global FD cap controlled by fs.file-max.
Check current value with:
cat /proc/sys/fs/file-max
If it's low, raise it by editing /etc/sysctl.conf:
fs.file-max = 2097152
Then apply changes:
sysctl -p
This step is often overlooked but essential under high load. Think of it as the ceiling for all processes combined — including Nginx, PHP, MySQL, etc.
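To see how close the whole system currently is to that ceiling, /proc/sys/fs/file-nr reports the allocated handles, unused handles, and the maximum on one line; a minimal check looks like this:
# Fields in /proc/sys/fs/file-nr: allocated handles, unused handles, maximum.
read -r allocated unused max < /proc/sys/fs/file-nr
echo "System-wide file handles in use: $((allocated - unused)) of $max"
# Raise the cap immediately (still add it to /etc/sysctl.conf to survive reboots):
sudo sysctl -w fs.file-max=2097152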
4. Open Log Files and Unused Resources
Every access log, error log, or upstream connection Nginx opens consumes a file descriptor.
If you have dozens of virtual hosts, each writing to separate logs, those add up fast.
Some things to consider:
- Use open_log_file_cache to reduce overhead.
- Consolidate logs where possible.
- Avoid unnecessary upstream blocks or proxy connections.
Also, some modules or misconfigured third-party integrations might leak FDs over time — especially if they don't close upstream connections properly.
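If you suspect a leak, a quick way to watch descriptor usage per process, without any extra tools, is to count the entries under /proc (reading another user's file descriptor directory may need sudo):
# Count open descriptors for each nginx process straight from /proc.
# A count that keeps growing on a worker can point to leaked upstream
# connections or an excessive number of open log files.
for pid in $(pidof nginx); do
  printf 'PID %s: %s open FDs\n' "$pid" "$(ls "/proc/$pid/fd" | wc -l)"
done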
Basically, the "Too many open files" error comes down to limits being too low for the workload. Check ulimits, tweak worker_connections, raise system-wide caps, and minimize unnecessary file handles. Once configured, monitor what's actually open with lsof, for example lsof -p "$(pidof nginx | tr ' ' ',')" (lsof -p expects a comma-separated list of PIDs).