Reputation: 920
I've been learning about Celery and haven't been able to find the answer to a conceptual question, and my experiments have given odd results.
When there are scheduled tasks (by scheduled, I don't mean periodic, but scheduled to run in the future using eta=x) submitted to Celery, they seem to be consumed from the queue by a worker right away (rather than staying in the default celery key/queue in Redis). Presumably, the worker will actually execute the tasks at their eta.
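For reference, here is roughly how I'm submitting them (a minimal sketch; the add task, broker URL and delay are just placeholders):

```python
from datetime import datetime, timedelta, timezone

from celery import Celery

app = Celery("tasks", broker="redis://localhost:6379/0")

@app.task
def add(x, y):
    return x + y

# The message is picked up by a worker right away, but should only run at its eta.
add.apply_async(args=(2, 3), eta=datetime.now(timezone.utc) + timedelta(hours=1))
```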
What happens if that worker is shut down or restarted (to update its registered tasks, for example)? Would those scheduled tasks be lost? They are not "running", so a warm shutdown wouldn't wait for them to finish, of course.
Is there a way to force those tasks to be returned to the queue and consumed by the next available worker?
I suppose, manually, one could dump the tasks before shutting down a worker:
http://celery.readthedocs.org/en/latest/userguide/workers.html#inspecting-workers
and resubmit them when a new worker is back up... but is this supposed to happen automatically?
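Roughly, the manual approach I have in mind looks like this (just a sketch using the inspect API from the link above; the broker URL is a placeholder):

```python
from celery import Celery

app = Celery("tasks", broker="redis://localhost:6379/0")

# Scheduled (eta/countdown) tasks that each worker is currently holding in memory.
scheduled = app.control.inspect().scheduled() or {}

for worker_name, entries in scheduled.items():
    for entry in entries:
        request = entry["request"]
        print(worker_name, request["name"], request["args"], entry["eta"])
        # ...and one could re-send each of these with apply_async once the new
        # worker is up -- but that is exactly the manual step I'd like to avoid.
```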
I'd really appreciate any help with this. Thanks.
Upvotes: 10
Views: 4571
Reputation: 47
Update: Celery 5.1
If the worker process executing a task is abruptly killed, the worker will acknowledge the message even if acks_late is enabled. This is the library's default and intentional behaviour. [Ref]
To change the default settings and re-queue your unfinished tasks, you can use the task_reject_on_worker_lost config. [Ref]
Keep in mind, though, that this could lead to a message loop and cause unintended effects if your tasks are not idempotent.
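A minimal sketch of those two settings together (the broker URL is a placeholder; only worth doing if your tasks are safe to re-run):

```python
from celery import Celery

app = Celery("tasks", broker="redis://localhost:6379/0")

app.conf.update(
    task_acks_late=True,              # ack after the task runs instead of before
    task_reject_on_worker_lost=True,  # re-queue the message if the worker process dies
)
```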
Specifically for eta tasks, the queue waits for workers to acknowledge the tasks before deleting them. With default settings, Celery workers ack right before the task is executed; with acks_late, they ack once the task has finished executing.
So when a worker fails to ack a task, typically because of a shutdown, restart, or lost connection, or because the Redis/SQS visibility_timeout was exceeded [ref], the queue will redeliver the message to any available worker.
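For Redis/SQS specifically, that redelivery window is the visibility_timeout broker transport option; a sketch (the value is just an example and should be longer than your longest eta):

```python
from celery import Celery

app = Celery("tasks", broker="redis://localhost:6379/0")

# An unacked message is redelivered once visibility_timeout (in seconds) expires,
# so it must exceed the longest eta/countdown you schedule.
app.conf.broker_transport_options = {"visibility_timeout": 43200}  # 12 hours (example)
```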
Upvotes: 1
Reputation: 199
Take a look at acks_late http://celery.readthedocs.org/en/latest/reference/celery.app.task.html#celery.app.task.Task.acks_late
If it is set to true, Celery won't acknowledge (and thus remove) the message from the queue until the task has finished executing.
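A minimal sketch of enabling it on a single task (the task and broker URL are placeholders):

```python
from celery import Celery

app = Celery("tasks", broker="redis://localhost:6379/0")

@app.task(acks_late=True)
def process(item_id):
    # If the worker dies before this returns, the message is never acked,
    # so the broker can redeliver it to another worker.
    ...
```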
Upvotes: 4