Reputation:
I had a look at the sources of FastCGI (fcgi-2.4.0) and actually there's no sign of fork. If I'm correct, the web server spawns a process for the FastCGI module (compiled into it or loaded as an SO/DLL) and hands control of the main socket (usually TCP port 80) over to it.
On *nix the FastCGI module "locks" that socket using a file write lock (libfcgi/os_unix.c:989) on the whole file descriptor (the listen socket itself); this way, when new connections come in, only the FastCGI module is able to process them. The lock on the listening socket is released just before handing control over to the HTTP request processing.
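To illustrate, here is a simplified sketch (my own function names, not libfcgi's) of the kind of fcntl() write lock I'm describing:

    #include <fcntl.h>
    #include <unistd.h>

    /* Sketch of the idea behind the lock around os_unix.c:989: sibling
     * processes contend for an exclusive lock on the whole listen
     * descriptor, so only one of them at a time proceeds into accept(). */
    static int lock_listen_fd(int listen_fd)
    {
        struct flock lock;
        lock.l_type   = F_WRLCK;   /* exclusive (write) lock */
        lock.l_whence = SEEK_SET;
        lock.l_start  = 0;
        lock.l_len    = 0;         /* length 0 = lock the whole descriptor */
        return fcntl(listen_fd, F_SETLKW, &lock);   /* block until granted */
    }

    static int unlock_listen_fd(int listen_fd)
    {
        struct flock lock;
        lock.l_type   = F_UNLCK;   /* released before the request is processed */
        lock.l_whence = SEEK_SET;
        lock.l_start  = 0;
        lock.l_len    = 0;
        return fcntl(listen_fd, F_SETLK, &lock);
    }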
Since the FastCGI module is not multi-process/multi-threaded (there is no internal use of fork/pthread_create), I assume that concurrent handling of multiple simultaneous connections is obtained by the web server spawning n FastCGI module processes (through OS_SpawnChild). If we spawn, for example, 3 FastCGI processes (Apache calls OS_SpawnChild 3 times), does that mean we can have at most 3 requests served concurrently?
A) Is my understanding of the way FastCGI works correct?
B) If the cost for the OS to spawn a new process and open a connection to a local DB can be considered negligible, what are the advantages of FastCGI over an old-fashioned executable approach?
Thanks, Ema! :-)
Upvotes: 8
Views: 5319
Reputation: 21
B: yes, IF the cost of spawning were zero, then legacy CGI would be pretty good. So if you don't have a lot of hits, plain old CGI is fine; run with it. The point of FastCGI is doing things that benefit from a lot of persistent storage, or from structures that have to be built BEFORE you can get your work done, like running queries against large databases, where you want to leave the DB libraries in memory instead of having to reload the whole shebang every time you want to run a query.
It matters when you have LOTS OF HITS.
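To make the persistence concrete, here is a minimal FastCGI worker using libfcgi's fcgi_stdio.h wrapper (the counter is just a stand-in to show that state survives between requests; a real app would keep its DB handle there instead). Build with something like cc worker.c -lfcgi:

    #include <fcgi_stdio.h>   /* libfcgi's stdio wrapper: printf() talks FastCGI */

    int main(void)
    {
        int served = 0;   /* lives across requests: this is the persistence */

        /* A real application would open its DB connection here, once,
         * and reuse it inside the loop for every request. */

        while (FCGI_Accept() >= 0) {   /* block until the next request */
            printf("Content-Type: text/plain\r\n\r\n");
            printf("This process has served %d request(s) so far\n", ++served);
        }
        return 0;
    }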
Upvotes: 2
Reputation:
Indeed,
so (A) is settled; now what about (B)? If I'm talking about executables (properly compiled C/C++ programs, not scripts like Perl/PHP/...), and if we consider the process spawn cost and the cost of a new DB connection negligible, then is this approach (FastCGI) just a small gain compared to plain CGI executables?
I mean, given that Linux is very fast at spawning (forking) a process, and if the DB is running locally (e.g. MySQL on the same host), the time it takes to start a new executable and connect to the DB is practically zero. In that case, with nothing to be interpreted, only Apache C/C++ modules would be faster.
Using the FastCGI approach you are even more vulnerable to memory leaks, since the process isn't forked/restarted every time... At this point, if you have to develop your CGI in C/C++, wouldn't it be better to use old-school CGI and/or Apache C/C++ modules directly?
Again, I'm not talking about scripts (Perl/PHP/...); I'm talking about compiled CGI.
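For reference, the old-school compiled CGI I mean is nothing more than this kind of one-shot program:

    #include <stdio.h>

    /* Classic CGI: the web server fork()+exec()s this binary for every
     * single request; all state dies when the process exits. */
    int main(void)
    {
        printf("Content-Type: text/plain\r\n\r\n");
        printf("Hello from a freshly spawned CGI process\n");
        return 0;   /* any DB connection opened above is torn down here */
    }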
Thanks again, Cheers, Ema! :-)
Upvotes: 1
Reputation: 7775
The speed gain of FastCGI over normal CGI is that the processes are persistent, e.g. if you have any database handles to open, you can open them once. The same goes for any caching.
The main gain comes from not having to create a new PHP/Perl/etc. interpreter each time, which takes a surprising amount of time.
If you want multiple concurrent connections handled, you do have to have multiple FastCGI processes running. FastCGI is not a way of handling more connections through some kind of special concurrency. It is a way to speed up individual requests, which in turn allows handling of more requests. But yes, you are right: more concurrent requests require more processes running.
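Roughly speaking (this is my own simplification, not libfcgi's actual OS_SpawnChild), N workers sharing one listen socket give you at most N requests in flight:

    #include <sys/types.h>
    #include <unistd.h>

    /* Fork nworkers children that all inherit the same listen socket;
     * each runs its own accept loop, so at most nworkers requests are
     * handled concurrently. Error handling and respawning omitted. */
    static void spawn_workers(int nworkers, void (*worker_loop)(void))
    {
        int i;
        for (i = 0; i < nworkers; i++) {
            pid_t pid = fork();
            if (pid == 0) {        /* child: becomes one FastCGI worker */
                worker_loop();     /* e.g. an FCGI_Accept() loop */
                _exit(0);
            }
            /* parent keeps spawning; a supervisor would wait()/respawn */
        }
    }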
Upvotes: 5
Reputation: 14084
FastCGI-spawned processes are persistent: they're not killed once the request is handled; instead they're "pooled".
Upvotes: 4