Reputation: 31654
So I have an interesting situation and I'm not sure how to get around it.
We have a process that takes an aggregate look at our sales data and then builds a graph via Google Chart. To keep from hammering the database (or Google, for that matter), it only runs every 30 minutes: it checks the previous file's modification time to determine whether the 30-minute threshold has passed and, if so, builds a new file. We have two servers behind a load balancer, so both servers need access to the same location to store this file. We did this using an EBS share, mounted via NFS (our entire setup is in AWS). This process works just fine.
The problem is that sometimes the EBS share is slow or disconnected. That then ripples through our internal tools, which sit waiting on this one file to process (unless you turn the notice off). I've read a few threads (like this one) that talk about stream_set_timeout, but it's not clear how you would use it when loading a file (it's not exactly a stream), and I've been unable to find any examples.
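For reference, the stream_set_timeout pattern from those threads looks roughly like this. This is a sketch, not production code, and read_with_timeout is a hypothetical helper name; note that the PHP manual describes stream_set_timeout as applying to socket-based streams, so a read blocked in the kernel on a hung NFS mount may not actually be interrupted by it.

```php
<?php
// Sketch only: read_with_timeout() is a hypothetical helper. Per the PHP
// manual, stream_set_timeout() applies to socket-based streams; a read
// blocked in the kernel on a hung NFS mount may not honour it, so treat
// this as best-effort rather than a hard deadline.
function read_with_timeout(string $path, int $seconds)
{
    $fp = @fopen($path, 'rb');
    if ($fp === false) {
        return false;                 // mount unreachable: treat as a cache miss
    }
    stream_set_timeout($fp, $seconds);
    $data = stream_get_contents($fp);
    $meta = stream_get_meta_data($fp);
    fclose($fp);
    // If the stream reports a timeout, fall back to rebuilding the image
    // rather than leaving the page hanging on the share.
    return ($meta['timed_out'] || $data === false) ? false : $data;
}
```

The caller would treat a false return the same as a missing file and rebuild the image instead of waiting on the share.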
Here's a snippet to give you an idea of what the PHP file is doing:
$file    = '/ebs/path/to/image.png';
$timeout = 1800;                        // cache lifetime: 30 minutes
$newfile = false;

if (!is_file($file)) {
    $newfile = true;
} elseif (filemtime($file) + $timeout < time()) {
    // Cached image is stale; remove it and rebuild.
    $newfile = true;
    unlink($file);
}

if (!$newfile) {
    // Serve the cached image.
    $i = imagecreatefrompng($file);
    header('Content-Type: image/png');
    header('Expires: ' . gmdate('D, d M Y H:i:s', filemtime($file) + $timeout) . ' GMT');
    imagepng($i);
    imagedestroy($i);
    exit;
} else {
    // Build and output a new file here
}
How could I create a timeout for this script?
Upvotes: 4
Views: 350
Reputation: 31491
This is more of a solution to the underlying problem than an answer to the question as asked, but please post the output of iostat -x 1 both when the server is running fine and when the file takes a long time to load. I've found that when avgqu-sz goes above 30, even on IOPS-provisioned drives (ours are provisioned at 2000 IOPS), EBS slows to a crawl. The workaround is to avoid reading from or writing to the volume for a second or three until the situation clears up.
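To make that back-off concrete, here is a sketch that pulls avgqu-sz for one device out of iostat -x output. The device name xvdf and the threshold of 30 are assumptions for illustration, and the column layout varies between sysstat versions, which is why the script locates the column by its header name rather than hard-coding a position.

```shell
# queue_depth: print the most recent avgqu-sz value for a device,
# given `iostat -x` output on stdin. Prints 0 if the device is absent.
queue_depth() {
    awk -v dev="$1" '
        # Find which column holds avgqu-sz (position varies by version).
        /avgqu-sz/ { for (i = 1; i <= NF; i++) if ($i == "avgqu-sz") col = i }
        # Keep the last (most recent) sample for the device.
        $1 == dev && col { val = $col }
        END { print val + 0 }
    '
}

# Usage sketch (device name and threshold are assumptions):
#   depth=$(iostat -x 1 2 | queue_depth xvdf)
#   awk -v q="$depth" 'BEGIN { exit !(q > 30) }' && sleep 3   # back off while the queue drains
```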
Also, try cloning the volume and using the clone in place of the original. I've found that some AWS resources are simply 'bad' and need to be replaced; this goes for EC2 instances, EBS volumes, RDS instances, and others. That can happen when bad luck lands your VM on hardware shared with a neighbour who isn't being a good one. Cloning the resource and restarting it usually moves it to different hardware and resolves issues like this.
EDIT: Read this great post about how to interpret iostat output, with an emphasis on how it pertains to EBS. I've read it a good dozen times.
Upvotes: 5