Daniel

Reputation: 129

Multiple simultaneous requests to PHP script with NGINX

I have a webserver running NGINX & PHP, with a very basic multi client test.

<?php
    if(isset($_GET['query'])) {
        echo "HELLO MY NAME IS WEBSERVER";
    }
    if(isset($_GET['sleep'])) {
        sleep(10);
    }
?>

If I run http://servername.com/index.php?query, I get an instant response.

If I request ?sleep and then ?query together, ?query appears to be queued until ?sleep completes.

This happens across multiple clients. Client A can request ?sleep, which will affect Client B's ?query request. Client B is a completely different machine.

Is there any method of tweaking php.ini or my nginx config to allow a separate PHP worker process to spawn (or something along those lines)?

Edit: For a little background, here's my config.

nginx.conf:

    location ~ \.php$ {
            fastcgi_pass   127.0.0.1:9123;
            fastcgi_index  index.php;
            fastcgi_param  SCRIPT_FILENAME  $document_root$fastcgi_script_name;
            include        fastcgi_params;
    }

fastcgi_params:

fastcgi_param  QUERY_STRING       $query_string;
fastcgi_param  REQUEST_METHOD     $request_method;
fastcgi_param  CONTENT_TYPE       $content_type;
fastcgi_param  CONTENT_LENGTH     $content_length;

fastcgi_param  SCRIPT_NAME        $fastcgi_script_name;
fastcgi_param  REQUEST_URI        $request_uri;
fastcgi_param  DOCUMENT_URI       $document_uri;
fastcgi_param  DOCUMENT_ROOT      $document_root;
fastcgi_param  SERVER_PROTOCOL    $server_protocol;
fastcgi_param  REQUEST_SCHEME     $scheme;
fastcgi_param  HTTPS              $https if_not_empty;

fastcgi_param  GATEWAY_INTERFACE  CGI/1.1;
fastcgi_param  SERVER_SOFTWARE    nginx/$nginx_version;

fastcgi_param  REMOTE_ADDR        $remote_addr;
fastcgi_param  REMOTE_PORT        $remote_port;
fastcgi_param  SERVER_ADDR        $server_addr;
fastcgi_param  SERVER_PORT        $server_port;
fastcgi_param  SERVER_NAME        $server_name;

# PHP only, required if PHP was built with --enable-force-cgi-redirect
fastcgi_param  REDIRECT_STATUS    200;

PHP execution (runphp.bat):

set PATH=%cd%\php;%PATH%
start %cd%\php\php-cgi.exe -b 127.0.0.1:9123

Edit 2: OK, so it appears I need PHP-FPM, which is not available on Windows:

It is important to note that FPM is not built with the Windows binaries. Many of the guides you may find online rely on php-cgi.exe. Unfortunately they call it FPM, but this is incorrect!

The executable php-cgi.exe that is bundled with the Windows binaries is a FastCGI interface, but it is *not* FPM (FastCGI Process Manager). php-cgi.exe does not have multi-threading or concurrent request support, nor support for any of the FPM configuration options.

So, as a workaround, I'm trying the multiple php servers / processes approach:

upstream php {
    server  127.0.0.1:9000;
    server  127.0.0.1:9001;
    server  127.0.0.1:9002;
    server  127.0.0.1:9003;
}

location ~ \.php$ {
    fastcgi_pass   php;
    fastcgi_index  index.php;
    fastcgi_param  SCRIPT_FILENAME  $document_root$fastcgi_script_name;
    include        fastcgi_params;
}

However, NGINX will not start at all with this configuration; it refuses to accept the `upstream php {}` block.

Any ideas?

Thanks

Upvotes: 3

Views: 6357

Answers (3)

xianyun

Reputation: 1

https://github.com/deemru/php-cgi-spawner

php-cgi-spawner is a small and simple application for spawning multiple php-cgi processes on Windows for a web server using FastCGI.

Upvotes: -1

Daniel

Reputation: 129

As noted in the edits above, PHP-FPM isn't available on Windows. However, this can be worked around by spawning multiple php-cgi processes on different ports and configuring NGINX to load balance across them.

My "RunPHP.bat" script:

set PATH=%cd%\php;%PATH%
runhiddenconsole.exe %cd%\php\php-cgi.exe -b 127.0.0.1:9100
runhiddenconsole.exe %cd%\php\php-cgi.exe -b 127.0.0.1:9101
runhiddenconsole.exe %cd%\php\php-cgi.exe -b 127.0.0.1:9102
runhiddenconsole.exe %cd%\php\php-cgi.exe -b 127.0.0.1:9103
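
(`runhiddenconsole.exe` is a third-party helper that suppresses the console windows. A sketch of the same script using only the built-in `start` command, if you don't have that tool:)

```bat
set PATH=%cd%\php;%PATH%
rem start "" /b runs each php-cgi in the background of the current console
start "" /b %cd%\php\php-cgi.exe -b 127.0.0.1:9100
start "" /b %cd%\php\php-cgi.exe -b 127.0.0.1:9101
start "" /b %cd%\php\php-cgi.exe -b 127.0.0.1:9102
start "" /b %cd%\php\php-cgi.exe -b 127.0.0.1:9103
```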

My nginx.conf (php bits only):

http {

    upstream php_farm {
        server 127.0.0.1:9100 weight=1 max_fails=1 fail_timeout=1s;
        server 127.0.0.1:9101 weight=1 max_fails=1 fail_timeout=1s;
        server 127.0.0.1:9102 weight=1 max_fails=1 fail_timeout=1s;
        server 127.0.0.1:9103 weight=1 max_fails=1 fail_timeout=1s;
    }

    server {
        location ~ \.php$ {
                fastcgi_pass   php_farm;
                fastcgi_index  index.php;
                fastcgi_param  SCRIPT_FILENAME  $document_root$fastcgi_script_name;
                include        fastcgi_params;
        }
    }
}

Upvotes: 6

michail_w

Reputation: 4471

It looks like you have misunderstood how a request flows through HTTP/Nginx/PHP. Let me explain:

  1. HTTP is a stateless protocol. In a standard request/response cycle there is no way to send some content to the client, wait for some time (sleep), and then send more content.
  2. If some requests are blocking other requests, you need to spawn more PHP-FPM workers so that many simultaneous requests can be handled.
  3. There is a way to send some content to the client, close the connection, and then keep running PHP code: the server flushes the response and closes the connection between client and server, but the PHP worker does not finish its work. This is how Symfony runs its background tasks after the response is sent to the client.
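
Under PHP-FPM, point 3 can be sketched with `fastcgi_finish_request()` (an FPM-only function; the `function_exists` guard is there because php-cgi.exe on Windows does not provide it):

```php
<?php
// Send the response to the client immediately...
echo "HELLO MY NAME IS WEBSERVER";

// fastcgi_finish_request() flushes the response and closes the client
// connection, while this worker process keeps running.
if (function_exists('fastcgi_finish_request')) {
    fastcgi_finish_request();
}

// ...then do slow work after the client has already been answered.
sleep(10); // the client does not wait for this
?>
```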

For now you need to tweak your config. Pay attention to these two parameters in the Nginx config:

worker_processes  1;
worker_connections  1024;

The first one sets how many Nginx worker processes run; the second one sets the maximum number of simultaneous connections each worker can handle.
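
For context, these two directives live in different blocks of nginx.conf (a minimal sketch):

```nginx
worker_processes  1;       # main context: number of worker processes

events {
    worker_connections  1024;  # max simultaneous connections per worker
}
```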

Afterwards, tweak your PHP-FPM config. Look at these parameters:

pm = dynamic; allows FPM to vary the number of FPM workers
pm.max_children = 5; the maximum number of workers that may run at once
pm.start_servers = 3; the number of workers created on startup
pm.min_spare_servers = 2; the minimum number of idle workers kept around
pm.max_spare_servers = 4; the maximum number of idle workers kept around
pm.max_requests = 200; the number of requests a worker serves before it is respawned
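
Assembled into an FPM pool file, these settings would look like this (a sketch; the pool name `www` is the conventional default, not something from your setup):

```ini
[www]
pm = dynamic
pm.max_children = 5
pm.start_servers = 3
pm.min_spare_servers = 2
pm.max_spare_servers = 4
pm.max_requests = 200
```

Note that for `pm = dynamic`, `pm.start_servers` must lie between `pm.min_spare_servers` and `pm.max_spare_servers`, as it does here.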

Basically that's all you need. Now you have to experiment with all those params to find the best configuration for your case.

Upvotes: -1
