Steve Hanov

Reputation: 11574

How can I make nginx handle fastcgi requests concurrently?

Using a minimal fastcgi/nginx configuration on ubuntu 18.04, it looks like nginx only handles one fastcgi request at a time.

# nginx configuration
location ~ \.cgi$ {
    # Fastcgi socket
    fastcgi_pass  unix:/var/run/fcgiwrap.socket;

    # Fastcgi parameters, include the standard ones
    include /etc/nginx/fastcgi_params;
}

I demonstrate this by using a cgi script like this:

#!/bin/bash

echo "Content-Type: text/plain";
echo;
sleep 5;
echo Hello world

Use curl to access the script from two side-by-side command prompts, and you will see that the server handles the requests sequentially.
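For example, timing two concurrent requests from a single shell makes the serialization visible (the URL and script name are placeholders for wherever the script is served):

```shell
# Fire both requests at once and wait for them to finish.
# With only one fcgiwrap worker, total wall time is ~10s
# (sequential) instead of ~5s (parallel).
time ( curl -s http://localhost/test.cgi & \
       curl -s http://localhost/test.cgi & \
       wait )
```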

How can I ensure nginx handles fastcgi requests in parallel?

Upvotes: 5

Views: 3052

Answers (2)

Bin Ni

Reputation: 51

Nginx is a non-blocking server, even when working with fcgiwrap as a backend, so the number of nginx worker processes is not the cause of the problem. The real solution is to increase the number of fcgiwrap processes using the -c option. If fcgiwrap is launched with -c2, then even with a single nginx worker process you can run 2 cgi scripts in parallel.
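For example, a sketch of launching fcgiwrap manually with two children (the socket path is taken from the question; the exact service names and the need to stop the packaged instance first are assumptions that vary by distro):

```shell
# Stop the packaged fcgiwrap instance, if any, so the socket is free.
sudo systemctl stop fcgiwrap

# Start fcgiwrap with 2 child processes, bound to the same unix
# socket that the nginx fastcgi_pass directive points at.
sudo fcgiwrap -c 2 -s unix:/var/run/fcgiwrap.socket &
```

On Ubuntu it is usually cleaner to set the child count in the packaged init/systemd configuration rather than launch fcgiwrap by hand; you may also need to adjust the socket's ownership so the nginx worker (typically www-data) can connect to it.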

Upvotes: 5

Soleil

Reputation: 7287

In order to have nginx handle fastcgi requests in parallel, you'll need several things:

  1. Nginx >= 1.7.1 for thread pools, and this configuration:
worker_processes N; # N is an integer, or "auto"

where N is the number of worker processes; with auto, it equals the number of cores. If you do a lot of IO, you might want to go beyond this number, since having as many processes/threads as cores is no guarantee that the CPU will be saturated.

In terms of NGINX, the thread pool is performing the functions of the delivery service. It consists of a task queue and a number of threads that handle the queue. When a worker process needs to do a potentially long operation, instead of processing the operation by itself it puts a task in the pool’s queue, from which it can be taken and processed by any free thread.

Consequently, you want to choose N bigger than the maximum number of parallel requests, so you can pick, say, 1000 even if you have only 4 cores; for IO-bound work, idle threads cost only some memory, not much CPU.

  2. When you have many IO requests with large latencies, you'll also need aio threads in the 'http', 'server', or 'location' context, which is shorthand for:
# in the 'main' context
thread_pool default threads=32 max_queue=65536;

# in the 'http', 'server', or 'location' context
aio threads=default;

Note that switching from Linux to FreeBSD can be an alternative when dealing with slow IO. See the referenced blog post for a deeper understanding.
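Putting both pieces together, a minimal sketch of the resulting nginx configuration (the thread count and queue size are illustrative; the socket path matches the question):

```nginx
# in the 'main' context
worker_processes auto;
thread_pool default threads=32 max_queue=65536;

http {
    server {
        location ~ \.cgi$ {
            # offload blocking IO to the thread pool
            aio threads=default;

            fastcgi_pass unix:/var/run/fcgiwrap.socket;
            include /etc/nginx/fastcgi_params;
        }
    }
}
```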

Thread Pools in NGINX Boost Performance 9x! (www.nginx.com/blog)

Upvotes: 3
