PeterInvincible

Reputation: 2310

Laravel queues getting "killed"

Sometimes when I'm sending over a large dataset to a Job, my queue worker exits abruptly.

// $taskmetas is an array of arrays, each inner array holding 90 properties.
$this->dispatch(new ProcessExcelData($excel_data, $taskmetas, $iteration, $storage_path));

The ProcessExcelData job class creates an excel file using the box/spout package.
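
For context, a minimal sketch of what such a job class might look like. The constructor mirrors the dispatch call above, the box/spout 2.x writer API is assumed, and the output file name is purely illustrative:

<?php

namespace App\Jobs;

use Box\Spout\Common\Type;
use Box\Spout\Writer\WriterFactory;
use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Queue\InteractsWithQueue;
use Illuminate\Queue\SerializesModels;

class ProcessExcelData implements ShouldQueue
{
    use InteractsWithQueue, Queueable, SerializesModels;

    protected $excel_data;
    protected $taskmetas;
    protected $iteration;
    protected $storage_path;

    public function __construct($excel_data, $taskmetas, $iteration, $storage_path)
    {
        $this->excel_data   = $excel_data;
        $this->taskmetas    = $taskmetas;
        $this->iteration    = $iteration;
        $this->storage_path = $storage_path;
    }

    public function handle()
    {
        // Stream rows to disk with box/spout (v2.x API assumed).
        $writer = WriterFactory::create(Type::XLSX);
        $writer->openToFile($this->storage_path . '/export_' . $this->iteration . '.xlsx');

        foreach ($this->taskmetas as $row) {
            $writer->addRow(array_values($row));
        }

        $writer->close();
    }
}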

1st example - queue output with a small dataset:

forge@user:~/myapp.com$ php artisan queue:work --tries=1
[2017-08-07 02:44:48] Processing: App\Jobs\ProcessExcelData
[2017-08-07 02:44:48] Processed:  App\Jobs\ProcessExcelData

2nd example - queue output with a large dataset:

forge@user:~/myapp.com$ php artisan queue:work --tries=1
[2017-08-07 03:18:47] Processing: App\Jobs\ProcessExcelData
Killed

I don't get any error messages, the logs are empty, and the job doesn't appear in the failed_jobs table the way it does with other errors. The time limit is set to 1 hour and the memory limit to 2 GB.

Why are my queues abruptly quitting?

Upvotes: 15

Views: 31096

Answers (5)

Shamsul Haque

Reputation: 479

By default, Laravel uses a 90-second retry_after value, as noted in the documentation.

In my case, I tried customizing the retry_after value in config/queue.php, but that didn't work. I then tried the alternative mentioned in the documentation, the --timeout option of the queue:work Artisan command, and that did work. The working command is:

php artisan queue:work --timeout=900

In the command above I've increased the timeout to 15 minutes.
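
If you do still want to adjust retry_after as well, it lives on the connection entry in config/queue.php. A sketch for a database connection is below; the values are illustrative, and retry_after should stay larger than the worker's --timeout so a job is not retried while it is still running:

// config/queue.php (excerpt, illustrative values)
'connections' => [

    'database' => [
        'driver'      => 'database',
        'table'       => 'jobs',
        'queue'       => 'default',
        'retry_after' => 960, // a bit longer than the 900-second --timeout
    ],

],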

Upvotes: 4

SpinyMan

Reputation: 484

Sometimes you work with resource-intensive processes like image conversion or creating/parsing BIG Excel files, and the --timeout option alone is not enough. You can set public $timeout = 0; in your job, but the worker can still be killed because of memory(!). By default, the worker's memory limit is 128 MB. To fix it, just add the --memory=256 option (or higher) to avoid this problem.
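
A sketch of the two pieces together, assuming the job class from the question (the 512 MB figure is just an example):

// In the job class: disable the per-job timeout entirely.
class ProcessExcelData implements ShouldQueue
{
    public $timeout = 0;

    // ...
}

and start the worker with a larger memory ceiling:

php artisan queue:work --timeout=0 --memory=512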

BTW:

The time limit is set to 1 hour and the memory limit to 2 GB

That applies only to PHP-FPM in your case, not to the queue worker process.

Upvotes: 1

Mhmd

Reputation: 476

I know this is not what you are looking for, but I had the same problem, and I think it happens because of the OS (I will update this if I find the exact reason). Try

queue:listen

instead of

queue:work

The main difference between the two is that queue:listen reloads the Job class code for every job (so you don't need to restart your workers or Supervisor), while queue:work boots the framework once, caches it, and runs much faster than queue:listen. In my case the OS could not keep up with that speed while preparing the queue connection (Redis, in my case).

The queue:listen command runs queue:work internally (you can verify this by looking at your running processes in htop or similar).

The reason I'm telling you to check the queue:listen command is the speed: the OS can easily keep up with it and has no problem preparing your queue connection (in my case, there are no more silent kills).

To find out whether you have my problem, change your queue driver to "sync" in .env and check whether the job is still killed. If it isn't, you'll know the problem is in preparing the queue connection for use (see the sketch after this list).

  • To find out whether you have a memory problem, run your queue with queue:listen or the sync driver; PHP will then return an error for it, and you can increase the memory limit and test again.

  • You can use this code to allow more memory for testing in your code:

    ini_set('memory_limit', '1G'); // 1 gigabyte

Upvotes: 3

Ryan

Reputation: 24035

This worked for me:

I had a Supervisord job:

Job ID: Job_1
Queue: Default
Processes: 1
Timeout: 60
Sleep Time: 3
Tries: 3

https://laravel.com/docs/5.6/queues#retrying-failed-jobs says:

To delete all of your failed jobs, you may use the queue:flush command:

php artisan queue:flush

So I did that (after running php artisan queue:failed to see that there were failed jobs).

Then I deleted my Supervisord job and created a new one like it, but with a 360-second timeout.

It was also important to restart the Supervisord job (within the control panel of my Cloudways app) and to restart the entire Supervisord process (within the control panel of my Cloudways server).

After trying to run my job again, I noticed it in the failed_jobs table and read that the exception was related to cache file permissions, so I clicked the Reset Permission button in my Cloudways dashboard for my app.

Upvotes: 0

Smruti Ranjan

Reputation: 302

You can try giving it a timeout, e.g. php artisan queue:work --timeout=120

By default the timeout is 60 seconds, so we explicitly override it as shown above.

Upvotes: 19
