rodrigo-silveira

Reputation: 13088

Load balancing PHP built-in server?

My development environment consists of the built-in PHP server, which works great:

APP_ENV=dev php -S localhost:8080 -c php.ini web/index.php

One issue is that the built-in server is single-threaded, so lots of parallel XHRs end up resolving sequentially. Worst of all, it doesn't mimic our production environment very well: some front-end concurrency issues simply don't exist in this setup.

My question:

What's an existing solution that I could leverage that would proxy requests asynchronously to multiple instances of the same PHP built-in server?

For example, I'd have a few terminal sessions running the built-in server on different ports, and each request would be routed to one of those instances. In other words, I want multiple instances of my application running in parallel using the simplest possible setup (no Apache or Nginx if possible).
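
Concretely, I'm picturing something like this, with each instance in its own terminal (the ports are arbitrary) and some lightweight proxy in front spreading requests across them:

APP_ENV=dev php -S localhost:8081 -c php.ini web/index.php
APP_ENV=dev php -S localhost:8082 -c php.ini web/index.php
APP_ENV=dev php -S localhost:8083 -c php.ini web/index.php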

Upvotes: 9

Views: 2026

Answers (2)

bishop

Reputation: 39474

A super-server, like inetd or tcpserver, works well. I'm a fan of the latter:

tcpserver waits for incoming connections and, for each connection, runs a program of your choice.

With that in place, you want a small reverse proxy to pull the HTTP protocol off the wire and hand it over to a connection-specific PHP server. Pretty simple:

$ cat proxy-to-php-server.sh
#!/bin/bash -x

# get a random port -- this could be improved
port=$(shuf -i 2048-65000 -n 1)

# start the PHP server in the background
php -S localhost:"${port}" -t "$(realpath "${1:?Missing path to serve}")" &
pid=$!
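# give the server a moment to bind before we connect to it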
sleep 1

# proxy standard in to nc on that port
nc localhost "${port}"

# kill the server we started
kill "${pid}"

Ok, now you're all set. Start listening on your main port:

tcpserver -v -1 0 8080 ./proxy-to-php-server.sh ./path/to/your/code/

In English, this is what happens:

  • tcpserver starts listening on all interfaces at port 8080 (0 8080) and prints debug information on startup and each connection (-v -1)
  • For each connection on that port, tcpserver spawns the proxy helper, serving the given code path (./path/to/your/code/). Pro tip: make this an absolute path.
  • The proxy script starts a purpose-built PHP web server on a random port. (This could be improved: script doesn't check if port is in use.)
  • Then the proxy script passes its standard input (coming from the connection tcpserver serves) to the purpose-built server
  • The conversation happens, then the proxy script kills the purpose-built server
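
Once tcpserver is listening, a quick smoke test from another terminal could be as simple as (the URL path is just an example):

$ curl -v http://localhost:8080/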

This should get you in the ballpark. I've not tested it extensively (only on GNU/Linux, CentOS 6 specifically). You'll need to tweak the proxy's invocation of the built-in PHP server to match your use case.

Note that this isn't a "load balancing" server, strictly: it's just a parallel ephemeral server. Don't expect too much production quality out of it!
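
One of the improvements the script itself flags, checking whether the random port is already taken, could be sketched roughly like so (assumes an nc that supports -z; it would replace the single shuf line above):

# keep drawing random ports until nothing is listening on the one we picked
port=$(shuf -i 2048-65000 -n 1)
while nc -z localhost "${port}"; do
    port=$(shuf -i 2048-65000 -n 1)
done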


To install tcpserver:

$ curl -sS http://cr.yp.to/ucspi-tcp/ucspi-tcp-0.88.tar.gz | tar xzf -
$ cd ucspi-tcp-0.88/
$ curl -sS http://www.qmail.org/moni.csi.hu/pub/glibc-2.3.1/ucspi-tcp-0.88.errno.patch | patch -Np1
$ sed -i 's|/usr/local|/usr|' conf-home
$ make
$ sudo make setup check

Upvotes: 7

James K

Reputation: 617

I'm going to agree that replicating a virtual copy of your production environment is your best bet. You don't just want to surface concurrency issues, you want to surface the same issues you'd see in production, and there's little guarantee you'll hit all of the same ones under an alternate setup.

If you do want to do this, however, you don't have many options. Either you direct incoming requests to an intermediate piece of software that dispatches them to the PHP backends -- which is the Apache or Nginx solution -- or you don't, and each request is handled directly by the single PHP thread.

If you're not willing to use that interposed software, there's only one layer left between you and the client: networking. You could, in theory, set up round-robin DNS for yourself: give yourself multiple IPs, run a PHP server listening on each, and let client connections spread across them. Note that this assigns each client to a specific process, which may not be the level of parallelism you're looking for.
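
A rough sketch of the "one PHP server per IP" half of that idea, assuming Linux (where the whole 127.0.0.0/8 range already routes to loopback) and the layout from the question; the round-robin resolution itself would still need a DNS record returning all of those addresses:

APP_ENV=dev php -S 127.0.0.1:8080 -c php.ini web/index.php
APP_ENV=dev php -S 127.0.0.2:8080 -c php.ini web/index.php
APP_ENV=dev php -S 127.0.0.3:8080 -c php.ini web/index.php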

Upvotes: 3
