I've configured queuing on Laravel 5.4 using the "beanstalkd" queue driver. I deployed it on CentOS 7 (cPanel) and installed Supervisor, but I have two main problems.

In the logs I found this exception: `local.ERROR: exception 'PDOException' with message 'SQLSTATE[42S02]: Base table or view not found: 1146 Table '{dbname}.failed_jobs' doesn't exist'`. So Question #1 is: do I need to set up any database tables for the "beanstalkd" queue driver, and if so, what is their structure?
I've also configured the `queue:work` command in the Supervisor config file as follows:
```ini
[program:test-queue]
process_name=%(program_name)s_%(process_num)02d
command=php /home/****/****/artisan queue:work beanstalkd --sleep=3 --tries=3
autostart=true
autorestart=true
user=gcarpet
numprocs=8
redirect_stderr=true
stdout_logfile=/home/*****/*****/storage/logs/supervisor.log
```
I found that supervisor.log contained multiple entries for the job even after the first run was marked "Processed". Question #2: I dispatched the job once, but it was pushed onto the queue several times. How can I prevent the same job from being pushed to the queue multiple times?
```
[2019-05-14 09:08:15] Processing: App\Jobs\{JobName}
[2019-05-14 09:08:15] Processing: App\Jobs\{JobName}
[2019-05-14 09:08:15] Failed: App\Jobs\{JobName}
[2019-05-14 09:08:24] Processed: App\Jobs\{JobName}
[2019-05-14 09:08:24] Processing: App\Jobs\{JobName}
[2019-05-14 09:08:33] Processed: App\Jobs\{JobName}
[2019-05-14 09:08:33] Processing: App\Jobs\{JobName}
[2019-05-14 09:08:41] Processed: App\Jobs\{JobName}
[2019-05-14 09:08:41] Processing: App\Jobs\{JobName}
[2019-05-14 09:08:41] Failed: App\Jobs\{JobName}
```
Yes: the `failed_jobs` table is required so that failed jobs can be recorded. Generate its migration and run it:

```shell
php artisan queue:failed-table
php artisan migrate
```
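For reference, regarding Question #1's request for the table structure: the migration generated by `queue:failed-table` in Laravel 5.4 creates roughly the schema below. This is a sketch from the stock stub; verify it against the file the command actually generates for you.

```php
<?php

use Illuminate\Support\Facades\Schema;
use Illuminate\Database\Schema\Blueprint;
use Illuminate\Database\Migrations\Migration;

class CreateFailedJobsTable extends Migration
{
    public function up()
    {
        Schema::create('failed_jobs', function (Blueprint $table) {
            $table->increments('id');
            $table->text('connection');    // queue connection name, e.g. "beanstalkd"
            $table->text('queue');         // queue the job was pushed onto
            $table->longText('payload');   // serialized job
            $table->longText('exception'); // exception that caused the failure
            $table->timestamp('failed_at')->useCurrent();
        });
    }

    public function down()
    {
        Schema::dropIfExists('failed_jobs');
    }
}
```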
This behaviour is controlled by the `tries` option, which your queue worker receives either on the command line:

```shell
php artisan queue:work --tries=3
```

...or through the `$tries` property of the specific job:
```php
<?php

namespace App\Jobs;

use Illuminate\Contracts\Queue\ShouldQueue;

class Reader implements ShouldQueue
{
    // Number of times the job may be attempted before it is marked as failed.
    public $tries = 5;
}
```
You are currently seeing each job retried 3 times and then failing. Check your logging output and the `failed_jobs` table to see which exceptions have been thrown, and fix those appropriately.

A job is retried whenever its `handle` method throws an exception. After the allowed number of retries, the job will fail and its `failed()` method will be invoked. Failed jobs are stored in the `failed_jobs` table for later reference or manual retrying.
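As a sketch of that lifecycle (the `Reader` class and the bodies of the methods are illustrative), the `failed()` hook sits on the job class alongside `handle`:

```php
<?php

namespace App\Jobs;

use Exception;
use Illuminate\Contracts\Queue\ShouldQueue;

class Reader implements ShouldQueue
{
    public $tries = 5;

    public function handle()
    {
        // If this throws, the job is released back onto the queue and
        // retried, up to $tries attempts in total.
    }

    // Invoked once the job has exhausted all of its attempts; the job is
    // then recorded in the failed_jobs table.
    public function failed(Exception $exception)
    {
        // e.g. notify an administrator or log the final failure.
    }
}
```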
The --timeout value should always be at least several seconds shorter than your retry_after configuration value. This will ensure that a worker processing a given job is always killed before the job is retried. If your --timeout option is longer than your retry_after configuration value, your jobs may be processed twice.
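To make the relationship concrete, a sketch with illustrative values: `retry_after` lives in the connection settings in `config/queue.php`, while `--timeout` is passed to the worker (e.g. in your Supervisor `command` line), and must stay comfortably below `retry_after`.

```php
<?php
// config/queue.php (illustrative values)
'connections' => [
    'beanstalkd' => [
        'driver'      => 'beanstalkd',
        'host'        => 'localhost',
        'queue'       => 'default',
        // Seconds the queue waits before re-releasing a reserved job.
        'retry_after' => 90,
    ],
],
```

With `retry_after => 90`, running the worker as `php artisan queue:work beanstalkd --timeout=60` ensures a stuck worker is killed well before the queue hands the same job to another worker, avoiding the duplicate "Processing" entries you see in your log.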
See Job Expirations & Timeouts in the Laravel queue documentation.