user1469734

Reputation: 801

Optimal Nginx config to handle thousands of request within seconds

What are the optimal settings for Nginx to handle LOTS of requests at the same time?

My server runs Nginx and PHP 7.3 on Ubuntu 20.04 LTS. The application is built with Laravel 7.

This is my current config:

location ~ \.php$ {
    fastcgi_split_path_info ^(.+\.php)(/.+)$;
    fastcgi_pass unix:/var/run/php/php7.3-fpm.sock;
    fastcgi_index index.php;
    fastcgi_buffer_size 14096k;
    fastcgi_buffers 512 14096k;
    fastcgi_busy_buffers_size 14096k;
    fastcgi_connect_timeout 60;
    fastcgi_send_timeout 300;
    fastcgi_read_timeout 300;
}

I found the fastcgi parameters via Google and tweaked the numbers to some high values.
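(For comparison: `fastcgi_buffers 512 14096k` reserves up to roughly 7 GB of buffer space per connection, which is almost certainly not what was intended. A sketch of more conventional values — these numbers are illustrative, not from the question:

```nginx
fastcgi_buffer_size 16k;        # first chunk of the response (headers)
fastcgi_buffers 32 16k;         # 32 buffers of 16k each, ~512k per connection
fastcgi_busy_buffers_size 32k;  # must be >= one buffer, < total minus one
```

Oversized buffers do not cause the 502s below, but they waste memory under load.)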

The application does the following:

All four steps complete within a couple of seconds.

The server is not peaking in CPU or memory when this happens; the only symptom is that some users get a 502 error.

It looks like a server config issue in Nginx.

These are the stats of the server at the moment it happened:

As a side note, I disabled VerifyCsrfToken in Laravel for the routes that are called, to avoid extra server load.

What am I missing? Do I also have to change some PHP-FPM settings? If so, which ones, and where can I do that?

This is what the Nginx error log for the domain tells me:

2020/04/25 13:58:14 [error] 7210#7210: *21537 connect() to unix:/var/run/php/php7.3-fpm.sock failed (11: Resource temporarily unavailable) while connecting to upstream, client: 54.221.15.18, server: website.url, request: "GET /loader HTTP/1.1", upstream: "fastcgi://unix:/var/run/php/php7.3-fpm.sock:", host: "website.url"

Settings of www.conf:

pm.max_children = 100
pm.start_servers = 25
pm.min_spare_servers = 25
pm.max_spare_servers = 50
pm.max_requests = 9000
;pm.process_idle_timeout = 10s;
;pm.status_path = /status
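(A hedged suggestion, not part of the original question: the pool config above already contains a commented-out status page. Uncommenting it exposes counters such as "listen queue" and "listen queue len", which show directly whether PHP-FPM's socket backlog is overflowing — the likely cause of the 502s:

```ini
; Re-enable the status page that is commented out above.
; /status is the value already present in the file.
pm.status_path = /status
```

The status page then needs to be routed through fastcgi_pass in nginx, typically restricted to localhost.)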

Upvotes: 9

Views: 5026

Answers (2)

mforsetti

Reputation: 423

(11: Resource temporarily unavailable)

That's EAGAIN/EWOULDBLOCK: nginx accepted the client connection, but it could not connect to PHP-FPM's UNIX socket without blocking (waiting). Most likely the socket's listen backlog was full, so the connect attempt failed and nginx returned a 502 to the client.

There's a few ways to solve this, either:

  1. increase the listen.backlog value in your PHP-FPM pool config, together with the kernel's net.core.somaxconn sysctl, which caps the listen queue length of a UNIX socket.
  2. create multiple PHP-FPM pools, then use an upstream block in your nginx config to balance across these pools.
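Option 1 is a one-line pool change (e.g. `listen.backlog = 65535`, raised together with `net.core.somaxconn`). Option 2 might look like the following sketch — the two socket paths are hypothetical pool names, not anything from the question:

```nginx
# Two hypothetical PHP-FPM pools, each listening on its own socket:
upstream php_backend {
    server unix:/var/run/php/php7.3-fpm-a.sock;
    server unix:/var/run/php/php7.3-fpm-b.sock;
}

# Then, inside the existing "location ~ \.php$" block, replace the
# fastcgi_pass line with:
#     fastcgi_pass php_backend;
```

Each pool gets its own master process and its own listen queue, so a burst of requests is spread across two backlogs instead of one.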

Upvotes: 1

Omer YILMAZ

Reputation: 1263

Edit /etc/security/limits.conf, enter:

# vi /etc/security/limits.conf

Set the soft and hard nofile limits for the nginx user as follows:

nginx       soft    nofile    10000
nginx       hard    nofile    30000
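(A hedged addition: raising the OS limit alone may not help, because nginx can also set its own descriptor limit in nginx.conf. A sketch with illustrative values matching the limits above:

```nginx
# Top of nginx.conf (main context):
worker_rlimit_nofile 30000;   # per-worker open-file limit

events {
    worker_connections 10000; # must stay below worker_rlimit_nofile
}
```

Note that the reported error is EAGAIN on the PHP-FPM socket, not EMFILE, so exhausted file descriptors are probably not the cause here.)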

Upvotes: 0
