Fadi Sharif

Reputation: 310

Max Attempts Exceeded Exception queue laravel

I have created an application that sends e-mails to multiple users, but I am running into a problem when dealing with a large number of recipients.

The following error appears in the failed_jobs table:

Illuminate\Queue\MaxAttemptsExceededException:
App\Jobs\ESender has been attempted too many times or run too long.
The job may have previously timed out.
in D:\EmailSender\vendor\laravel\framework\src\Illuminate\Queue\Worker.php:649

and this is the payload in the failed_jobs table:

{
   "uuid":"ff988083-c1da-4d20-a2e3-c2a10e154c79",
   "timeout":9000,
   "id":"j2Lz0Ro0bkJpqwxKWTxC3Tiii71iE6Cm",
   "data":{
      "command":"O:16:\"App\\Jobs\\ESender\":13:{s:7:\"timeout\";i:9000;s:12:\"receiver_obj\";O:45:\"Illuminate\\Contracts\\Database\\ModelIdentifier\":4:{s:5:\"class\";s:12:\"App\\Receiver\";s:2:\"id\";i:6;s:9:\"relations\";a:0:{}s:10:\"connection\";s:5:\"mysql\";}s:16:\"sender_all_hosts\";O:45:\"Illuminate\\Contracts\\Database\\ModelIdentifier\":4:{s:5:\"class\";s:15:\"App\\SenderHosts\";s:2:\"id\";a:4:{i:0;i:1;i:1;i:2;i:2;i:3;i:3;i:4;}s:9:\"relations\";a:0:{}s:10:\"connection\";s:5:\"mysql\";}s:11:\"message_obj\";O:45:\"Illuminate\\Contracts\\Database\\ModelIdentifier\":4:{s:5:\"class\";s:12:\"App\\Messages\";s:2:\"id\";i:36;s:9:\"relations\";a:0:{}s:10:\"connection\";s:5:\"mysql\";}s:7:\"counter\";i:1;s:3:\"job\";N;s:10:\"connection\";N;s:5:\"queue\";N;s:15:\"chainConnection\";N;s:10:\"chainQueue\";N;s:5:\"delay\";N;s:10:\"middleware\";a:0:{}s:7:\"chained\";a:0:{}}",
      "commandName":"App\\Jobs\\ESender"
   },
   "displayName":"App\\Jobs\\ESender",
   "timeoutAt":1594841911,
   "maxExceptions":null,
   "maxTries":null,
   "job":"Illuminate\\Queue\\CallQueuedHandler@call",
   "delay":null,
   "attempts":1
}


parts of code:

#1

class ESender implements ShouldQueue
{
    use Dispatchable, InteractsWithQueue, Queueable, SerializesModels;


    /**
     * The number of times the job may be attempted.
     *
     * @var int
     */
    public $tries = 100;

    /**
     * The number of seconds the job can run before timing out.
     *
     * @var int
     */
    public $timeout = 9999999;

     ...more code...
}

#2

public function handle()
{
    Redis::throttle('key')->allow(1)->every(20)->then(function () {
        // send email
        ..... more code .....
    }, function () {
        // Could not obtain lock...
        return $this->release(10);
    });
}

and this is my configuration:

queue.php:

'redis' => [
            'driver' => 'redis',
            'connection' => 'default',
            'queue' => env('REDIS_QUEUE', 'default'),
            'retry_after' => 9000,
            'block_for' => null,
        ],

.env

BROADCAST_DRIVER=log
CACHE_DRIVER=file
QUEUE_CONNECTION=database
SESSION_DRIVER=file
SESSION_LIFETIME=300
REDIS_CLIENT=predis
REDIS_HOST=127.0.0.1
REDIS_PASSWORD=null
REDIS_PORT=6379
QUEUE_DRIVER=database

Upvotes: 12

Views: 39544

Answers (4)

Fadi Sharif

Reputation: 310

Just increase the timeout if you want, but be careful not to occupy server resources for long periods:

php artisan queue:work --timeout=10000000
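If you raise the worker timeout this far, the retry_after value on the queue connection should stay above it, or the job may be retried while it is still running. A sketch of a matching config/queue.php entry (the numbers are illustrative, not prescribed):

```php
// config/queue.php — keep retry_after above the worker's --timeout
'redis' => [
    'driver' => 'redis',
    'connection' => 'default',
    'queue' => env('REDIS_QUEUE', 'default'),
    'retry_after' => 10000100, // illustrative: above --timeout=10000000
    'block_for' => null,
],
```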

Upvotes: 6

COgroup Digital

Reputation: 19

Run

php artisan config:clear
php artisan optimize:clear

then restart Supervisor.

Upvotes: 1

Danny Ebbers

Reputation: 919

The command that runs your queue worker needs --tries= and --timeout= to set the limits your queue worker will allow.

This makes sure that your jobs cannot go beyond the limits of your defined workers.

You can use the job properties to set the timeout or tries per job, and use the queue configuration file to set a default.
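A minimal sketch of those per-job properties (the values are illustrative); note that a property set on the job takes precedence over the matching worker flag:

```php
class ESender implements ShouldQueue
{
    use Dispatchable, InteractsWithQueue, Queueable, SerializesModels;

    public $tries = 5;      // illustrative; overrides --tries for this job
    public $timeout = 120;  // illustrative; keep below retry_after
}
```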

Upvotes: 2

Maarten Veerman

Reputation: 1621

You set a timeout in your job, but this timeout is larger than the retry_after value you have defined in the queue config.

See https://laravel.com/docs/7.x/queues#job-expirations-and-timeouts

There is a clear warning:

The --timeout value should always be at least several seconds shorter than your retry_after configuration value. This will ensure that a worker processing a given job is always killed before the job is retried. If your --timeout option is longer than your retry_after configuration value, your jobs may be processed twice.

You could define a new connection for long running jobs, and set this connection on the job (dispatch to specific connection), instead of using the timeout.
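A sketch of that approach, with an illustrative connection name: add a second connection in config/queue.php whose retry_after clears the job's timeout, then dispatch the long-running job onto it:

```php
// config/queue.php — extra connection for long-running jobs (name illustrative)
'redis-long-running' => [
    'driver' => 'redis',
    'connection' => 'default',
    'queue' => env('REDIS_QUEUE', 'default'),
    'retry_after' => 9500, // comfortably above the job's $timeout of 9000
    'block_for' => null,
],
```

The job can then be dispatched with ESender::dispatch(/* … */)->onConnection('redis-long-running'), and its worker started with a --timeout shorter than that retry_after.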

Upvotes: 8
