Reputation: 3900
I have a Laravel queued job which extracts links from a webpage. The timeout for the Queue listener configured through Laravel Forge is 240 seconds (4 minutes). However, jobs are taking up to 45 minutes to run.
My queue settings are:
'redis' => [
    'driver' => 'redis',
    'connection' => 'default',
    'queue' => 'default',
    'retry_after' => 350,
],
And there are multiple job processes running - up to 35 of them. As you can imagine, this is eating up a lot of server memory. The processes just seem to be hanging around. The command for these processes, as shown in top, is:
php7.1 artisan queue:work redis --once --queue=linkqueue --delay=0 --memory=128 --sleep=10 --tries=1 --env=local
How can a job run for 45 minutes if the timeout is 240 seconds? Why are there so many processes - shouldn't there just be one?
Also, any ideas why a script for extracting links should take 45 minutes to run?!
The script does work, that is, in most cases it runs as expected - it just takes ages. There are no errors reported/logged as far as I can see.
Code in the job is:
// Parse the fetched HTML and collect every anchor tag
$dom = new DOMDocument;
$dom->loadHTML($html);
$links = $dom->getElementsByTagName('a');

// Save each href as a new URL record
foreach ($links as $a) {
    $link = $a->getAttribute('href');
    $newurl = new URL;
    $newurl->url = $link;
    $newurl->save();
}
Update: Another, simpler job runs just fine, in under a second. It is specifically the link job above that is taking tens of minutes. Could it be a RAM issue or something? Is there anything else I can do to diagnose the problem? When run from a console command, the extract-links function itself completes in 1 or 2 seconds. It is only on the queue that it freaks out.
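One thing I'm planning to try is logging how long each stage of the job takes, so the slow part shows up in the worker log. A rough sketch of what the handle() method might look like with timing added (this assumes the HTML is passed to the job as a $this->html property; the log messages are just illustrative):

use DOMDocument;
use Illuminate\Support\Facades\Log;

public function handle()
{
    $start = microtime(true);

    // Parse the page and collect the anchor tags, timing the DOM work separately
    $dom = new DOMDocument;
    $dom->loadHTML($this->html);
    $links = $dom->getElementsByTagName('a');
    Log::info('DOM parsed', ['seconds' => microtime(true) - $start, 'links' => $links->length]);

    // Save each href so slow database inserts would show up in the log as well
    foreach ($links as $a) {
        $newurl = new URL;
        $newurl->url = $a->getAttribute('href');
        $newurl->save();
    }
    Log::info('Links saved', ['seconds' => microtime(true) - $start]);
}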
Upvotes: 3
Views: 4263
Reputation: 488
You can also do the following to make the timeout unlimited:
php artisan queue:listen --timeout=0
Upvotes: -2
Reputation: 60040
How can a job run for 45 minutes if the timeout is 240 seconds?
Because you have 'retry_after' => 350 on your queue connection. This means that if Laravel does not hear from the job after 350 seconds, it assumes the job has failed and retries it. That is what is producing multiple processes for the one job in your situation.
If you are happy to allow your jobs to run for up to 45 minutes, then you should set retry_after to a larger number, say 3600, which is 1 hour. That way a job will only be retried if it takes longer than 1 hour to run.
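As a sketch, based on the connection config in your question (the value is illustrative; pick whatever comfortably exceeds your longest expected job):

'redis' => [
    'driver' => 'redis',
    'connection' => 'default',
    'queue' => 'default',
    // Allow up to an hour before a job is assumed failed and retried
    'retry_after' => 3600,
],

Also keep the worker's --timeout value lower than retry_after, otherwise the worker can be killed mid-job and the job retried anyway.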
Upvotes: 5