Reputation: 1299
I have a script that runs a large number (around 100) of PHP files simultaneously. I noticed that the start of each script was often delayed considerably; when firing all scripts at once, some only started after 15 seconds.
I use the code below to run those scripts simultaneously (the code works without any errors). Please note that the 100 PHP files are on the same server as the script that calls them.
function my_curl($i, $url, $post_data, $return, $auth='', $proxy='', $timeout=5) {
    // Build one backgrounded curl command line for this request
    $curl = "/usr/bin/curl -s";
    if ($post_data != '') $curl .= " --data '$post_data'";
    if ($timeout != '')   $curl .= " --connect-timeout $timeout";
    if ($auth != '') {
        list($user, $pass) = explode(':', $auth);
        $curl .= " --basic --user $user:$pass";
    }
    if ($proxy != '') {
        list($proxy_ip, $proxy_port, $proxy_user, $proxy_pass) = explode(':', $proxy);
        $curl .= " --proxy $proxy_ip:$proxy_port --proxy-user $proxy_user:$proxy_pass";
    }
    if ($return == 0) {
        // discard the response, run in the background
        $curl .= " $url >/dev/null 2>/dev/null &";
    } else {
        // save the response to a per-request file, run in the background
        $curl .= " $url >return-$i 2>/dev/null &";
    }
    return "$curl\n";
}
$to_run = '';
for ($i = 0; $i < $max; $i++) {
    $url = 'http://www.some_url_to_the_same_server.com/post.php';
    $post_data = "var=web-$i";
    $to_run .= my_curl($i, $url, $post_data, $return, $auth, $proxy, $timeout);
}
// Write all curl commands to a shell script, then execute it
file_put_contents(realpath($_SERVER["DOCUMENT_ROOT"]).'/run_data/runner.sh', $to_run);
shell_exec('/bin/bash '.realpath($_SERVER["DOCUMENT_ROOT"]).'/run_data/runner.sh');
I have a 3-CPU Linux CentOS server. I know that the run queue (procs r) should not be much higher than 12 on my system, yet it is a lot higher (see below). When I run the above script and fire 100 PHP scripts at once, vmstat shows the following:
procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------
r b swpd free buff cache si so bi bo in cs us sy id wa st
30 0 60 90388 14428 223996 0 0 20 10 3 10 0 0 99 1 0
When the system is at rest I get the following output:
procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------
r b swpd free buff cache si so bi bo in cs us sy id wa st
0 0 60 155064 14672 223952 0 0 20 10 3 10 0 0 99 1 0
Since a high procs r value indicates CPU overload, I don't understand how the CPU can show 99% idle while procs r is very high at the same time.
What can I do to improve the performance of my system so that all 100 scripts really are fired at once?
Your help is much appreciated.
UPDATE 1:
This is the relevant part of my httpd.conf file:
User apache
Group apache
KeepAlive On
KeepAliveTimeout 30
ServerAdmin admin@localhost
DocumentRoot "/var/www/html"
MaxClients 50
MaxRequestsPerChild 50
StartServers 5
MinSpareServers 5
MaxSpareServers 20
MaxKeepAliveRequests 50
Upvotes: 0
Views: 1707
Reputation: 3260
Each of these curl calls makes a separate HTTP request to the server. With the MaxClients directive set to 50, only that many requests can be processed at a time.
Possible solutions:
1) If you don't need to process the output, don't wait for it:
pclose(popen('/path/to/executable', 'r'));
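Applied to the question's launcher, a rough sketch could look like this (the URL and loop count are taken from the question; everything else is illustrative):
// Fire-and-forget: each curl is backgrounded with '&', so pclose(popen())
// returns immediately and PHP does not wait for the request to finish.
$max = 100;
for ($i = 0; $i < $max; $i++) {
    $cmd = "/usr/bin/curl -s --data 'var=web-$i'"
         . " 'http://www.some_url_to_the_same_server.com/post.php'"
         . " >/dev/null 2>&1 &";
    pclose(popen($cmd, 'r'));
}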
2) Don't use sub-requests unless absolutely necessary: try to rewrite the workers as CGI/CLI scripts that can be run directly instead of being called through HTTP sub-requests; a sketch of that approach follows.
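As an illustration only (the php binary path, the worker path under the question's DocumentRoot, and the idea that post.php could read its parameter from $argv instead of $_POST are assumptions), the launcher could skip Apache entirely:
// Run the worker through the PHP CLI instead of an HTTP sub-request,
// so Apache's MaxClients no longer limits how many can run at once.
// Assumes post.php is adapted to read $argv[1] instead of $_POST['var'].
$max = 100;
for ($i = 0; $i < $max; $i++) {
    $arg = escapeshellarg("web-$i");
    pclose(popen("/usr/bin/php /var/www/html/post.php $arg >/dev/null 2>&1 &", 'r'));
}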
3) Increase the maximum number of connections that will be processed simultaneously in the Apache conf:
MaxClients 256
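If you ever need to go above 256 with the prefork MPM, keep in mind that MaxClients cannot exceed ServerLimit (256 by default), so both would have to be raised, for example:
ServerLimit 512
MaxClients 512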
Upvotes: 1
Reputation: 6718
I don't know for sure, but I see three possibilities:
1) The sh script waits for earlier commands to finish before launching the next ones.
2) A curl issue: maybe it can only handle a certain number of queries at a time.
3) A web server issue.
You can get some ideas by viewing the output of the top or ps auxw commands and the Apache log to see how the scripts are executed. Also, you can add some sample output after every single curl command to check whether they really run at the same time (see the sketch below). If they really are running at the same time, it is a web server issue.
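For example, a minimal timing probe (purely illustrative; the log path and the assumption that the worker is post.php are mine) could be placed at the top of each called script:
// Append a start timestamp for every invocation. Comparing the entries in
// /tmp/start_times.log shows whether the 100 requests really start in
// parallel or are queued somewhere along the way.
$var = isset($_POST['var']) ? $_POST['var'] : '';
file_put_contents(
    '/tmp/start_times.log',
    sprintf("%.3f pid=%d var=%s\n", microtime(true), getmypid(), $var),
    FILE_APPEND
);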
Upvotes: 1