Reputation: 5088
How does one set worker_rlimit_nofile to a higher number, and what is the maximum or recommended value for it?
I'm trying to follow the following advice:
The second biggest limitation that most people run into is also related to your OS. Open up a shell, su to the user nginx runs as, and then run the command ulimit -a. Those values are all limitations nginx cannot exceed. In many default systems the open files value is rather limited; on a system I just checked it was set to 1024. If nginx runs into a situation where it hits this limit it will log the error (24: Too many open files) and return an error to the client. Naturally nginx can handle a lot more than 1024 files and chances are your OS can as well. You can safely increase this value. To do this you can either set the limit with ulimit or you can use worker_rlimit_nofile to define your desired open file descriptor limit.
From: https://blog.martinfjordvald.com/2011/04/optimizing-nginx-for-high-traffic-loads/
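Following that advice, a minimal sketch of checking and raising the limits for the current shell session (raising the hard limit itself requires root or an entry in /etc/security/limits.conf):

```shell
# Show the current per-process open-file limits for this shell
ulimit -Sn   # soft limit
ulimit -Hn   # hard limit

# Raise the soft limit up to the hard limit, for this session only
ulimit -Sn "$(ulimit -Hn)"
```

Note that this only affects the current shell and its children; for the nginx service itself, worker_rlimit_nofile (or the service manager's limit settings) is the usual place to raise it.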
Upvotes: 28
Views: 66570
Reputation: 4743
When setting the worker_rlimit_nofile parameter, you should consider both worker_connections and worker_processes. You may want to check your OS's file descriptor limits first using ulimit -Hn and ulimit -Sn, which give you the per-user hard and soft limits respectively. You can change the system-wide OS limit using sysctl:
sudo sysctl -w fs.file-max=$VAL
where $VAL is the number you would like to set. Then, you can verify using:
cat /proc/sys/fs/file-max
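A change made with sysctl -w does not survive a reboot. To persist it, the usual approach is a drop-in file under /etc/sysctl.d (the file name and value below are illustrative, not recommendations):

```shell
# Read the current system-wide open-file limit
cat /proc/sys/fs/file-max

# Persist a higher limit across reboots (requires root; value is illustrative):
#   echo 'fs.file-max = 500000' | sudo tee /etc/sysctl.d/99-file-max.conf
#   sudo sysctl --system
```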
If you are automating the configuration, it is easy to set worker_rlimit_nofile as:
worker_rlimit_nofile = worker_connections * 2
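Put together, the relevant nginx.conf directives might look like the following sketch (the numbers are illustrative, not recommendations):

```nginx
worker_processes      auto;    # one worker per available CPU
worker_rlimit_nofile  2048;    # per-worker FD cap: worker_connections * 2

events {
    worker_connections  1024;  # max simultaneous connections per worker
}
```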
worker_processes is set to 1 by default; however, you can set it to a number less than or equal to the number of cores on your server, which you can count with:
grep -c ^processor /proc/cpuinfo
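On modern Linux, nproc from coreutils reports the same information and additionally respects CPU affinity and cgroup limits, so the two counts can differ inside containers:

```shell
# Count CPUs two ways (Linux; /proc/cpuinfo assumed present)
grep -c ^processor /proc/cpuinfo
nproc
```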
EDIT: The latest versions of nginx set worker_processes auto; by default, which uses the number of processors available on the machine. Hence, it's important to know why you would really want to change it.
Normally, setting it to the highest value or to all available processors doesn't improve performance beyond a certain limit: you are likely to get the same performance with 24 processors as with 32. Tuning some kernel/TCP-stack parameters can also help mitigate bottlenecks.
And in microservices deployments (e.g. Kubernetes), it's very important to consider pod resource requests/limits when setting these configurations.
To check how many worker processes nginx has spawned, you can run ps -lfC nginx. For example, on my machine, which has 12 processors, nginx spawned 12 worker processes:
$ ps -lfC nginx
F S UID PID PPID C PRI NI ADDR SZ WCHAN STIME TTY TIME CMD
5 S root 70488 1 0 80 0 - 14332 - Jan15 ? 00:00:00 nginx: master process /usr/sbin/nginx -g daemon on; master_process on;
5 S www-data 70489 70488 0 80 0 - 14526 - Jan15 ? 00:08:24 nginx: worker process
5 S www-data 70490 70488 0 80 0 - 14525 - Jan15 ? 00:08:41 nginx: worker process
5 S www-data 70491 70488 0 80 0 - 14450 - Jan15 ? 00:08:49 nginx: worker process
5 S www-data 70492 70488 0 80 0 - 14433 - Jan15 ? 00:08:37 nginx: worker process
5 S www-data 70493 70488 0 80 0 - 14447 - Jan15 ? 00:08:44 nginx: worker process
5 S www-data 70494 70488 0 80 0 - 14433 - Jan15 ? 00:08:46 nginx: worker process
5 S www-data 70495 70488 0 80 0 - 14433 - Jan15 ? 00:08:34 nginx: worker process
5 S www-data 70496 70488 0 80 0 - 14433 - Jan15 ? 00:08:31 nginx: worker process
5 S www-data 70498 70488 0 80 0 - 14433 - Jan15 ? 00:08:46 nginx: worker process
5 S www-data 70499 70488 0 80 0 - 14449 - Jan15 ? 00:08:50 nginx: worker process
5 S www-data 70500 70488 0 80 0 - 14433 - Jan15 ? 00:08:39 nginx: worker process
5 S www-data 70501 70488 0 80 0 - 14433 - Jan15 ? 00:08:41 nginx: worker process
To print the exact count, you can filter by the worker UID (in my setup that is www-data, configured in nginx.conf as user www-data;):
$ ps -lfC nginx | awk '/nginx:/ && /www-data/{count++} END{print count}'
12
In Kubernetes, nginx spawns worker processes based on the pod's CPU resource request by default. For example, if you have the following in your deployment:
resources:
  requests:
    memory: 2048Mi
    cpu: 2000m
Then nginx will spawn 2 worker processes (2000 milli-CPU = 2 CPUs).
Upvotes: 18
Reputation: 281
worker_rlimit_nofile = worker_connections * 2

Each worker connection opens up to 2 file descriptors (1 for the upstream, 1 for the downstream).
Upvotes: 28