rodorgas

Reputation: 1092

Tasks being repeated in Celery

After a couple of days, my Celery service starts repeating a task over and over indefinitely. This is somewhat difficult to reproduce, but it happens regularly, about once a week or more often depending on the volume of tasks being processed.

I would appreciate any tips on how to get more data about this issue, since I don't know how to trace it. When it occurs, restarting Celery solves it temporarily.

I have one Celery node running with 4 workers (version 3.1.23). The broker and result backend are both on Redis. I'm posting to a single queue only, and I don't use Celery beat.

The config in Django's settings.py is:

BROKER_URL = 'redis://localhost:6380'
CELERY_RESULT_BACKEND = 'redis://localhost:6380'

Relevant part of the log:

[2016-05-28 10:37:21,957: INFO/MainProcess] Received task: painel.tasks.indicar_cliente[defc87bc-5dd5-4857-9e45-d2a43aeb2647]
[2016-05-28 11:37:58,005: INFO/MainProcess] Received task: painel.tasks.indicar_cliente[defc87bc-5dd5-4857-9e45-d2a43aeb2647]
[2016-05-28 13:37:59,147: INFO/MainProcess] Received task: painel.tasks.indicar_cliente[defc87bc-5dd5-4857-9e45-d2a43aeb2647]
...
[2016-05-30 09:27:47,136: INFO/MainProcess] Task painel.tasks.indicar_cliente[defc87bc-5dd5-4857-9e45-d2a43aeb2647] succeeded in 53.33468166703824s: None
[2016-05-30 09:43:08,317: INFO/MainProcess] Task painel.tasks.indicar_cliente[defc87bc-5dd5-4857-9e45-d2a43aeb2647] succeeded in 466.0324719119817s: None
[2016-05-30 09:57:25,550: INFO/MainProcess] Task painel.tasks.indicar_cliente[defc87bc-5dd5-4857-9e45-d2a43aeb2647] succeeded in 642.7634702899959s: None

Tasks are sent by user request with:

tasks.indicar_cliente.delay(indicacao_db.id)

Here's the source code of the task and the Celery service configuration.

Why are tasks being received multiple times after the service has been running for a while? How can I get consistent behavior?

Upvotes: 8

Views: 4633

Answers (3)

Andrii Rusanov

Reputation: 4606

It might be a bit out of date, but I've faced the same problem and fixed it with Redis. Long story short: Celery waits a certain amount of time for a task to complete, and if that time expires it redelivers the task. This is called the visibility timeout. The explanation from the docs:

If a task isn’t acknowledged within the Visibility Timeout the task will be redelivered to another worker and executed. This causes problems with ETA/countdown/retry tasks where the time to execute exceeds the visibility timeout; in fact if that happens it will be executed again, and again in a loop. So you have to increase the visibility timeout to match the time of the longest ETA you’re planning to use. Note that Celery will redeliver messages at worker shutdown, so having a long visibility timeout will only delay the redelivery of ‘lost’ tasks in the event of a power failure or forcefully terminated workers.

Example of the option: https://docs.celeryproject.org/en/stable/userguide/configuration.html#broker-transport-options

Details: https://docs.celeryq.dev/en/stable/getting-started/backends-and-brokers/redis.html#id1
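For the setup in the question (Celery 3.1 with Django-style uppercase settings), a minimal sketch of raising the visibility timeout would look like the following; the 43200-second value is only the docs' example and should be set above your longest expected task or ETA:

# settings.py -- sketch assuming Celery 3.1 old-style uppercase settings,
# as in the question; the timeout value is illustrative
BROKER_URL = 'redis://localhost:6380'
CELERY_RESULT_BACKEND = 'redis://localhost:6380'

# With the Redis transport, unacknowledged tasks are redelivered after this
# many seconds, so it must exceed your longest-running task or ETA.
BROKER_TRANSPORT_OPTIONS = {'visibility_timeout': 43200}  # 12 hours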

Upvotes: 8

James Mishra

Reputation: 4510

I ran into an issue like this, and raising the Celery visibility timeout didn't help.

It turned out I was also running a Prometheus exporter that instantiated its own Celery object with the default visibility timeout, which canceled out the higher timeout I had set in my application.

If you have multiple Celery clients, whether for submitting tasks, processing tasks, or just observing tasks, make sure they all share exactly the same configuration.
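One way to keep them in sync, as a sketch, is to put the broker settings in a single shared module and load it with config_from_object in every process (the module name celery_shared_config is hypothetical):

# celery_shared_config.py -- hypothetical shared module so workers, task
# submitters, and monitoring exporters all see identical broker settings
BROKER_URL = 'redis://localhost:6380'
CELERY_RESULT_BACKEND = 'redis://localhost:6380'
BROKER_TRANSPORT_OPTIONS = {'visibility_timeout': 43200}

# in every process that creates a Celery app
from celery import Celery
app = Celery('painel')
app.config_from_object('celery_shared_config')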

Upvotes: 0

rodorgas

Reputation: 1092

Solved by using the RabbitMQ broker instead of Redis.
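For reference, a minimal sketch of that change in the same Django settings, assuming a default local RabbitMQ install (the credentials and vhost below are the stock defaults, adjust as needed):

# settings.py -- sketch assuming a default local RabbitMQ installation
BROKER_URL = 'amqp://guest:guest@localhost:5672//'
# The result backend can stay on Redis; RabbitMQ uses real acknowledgements,
# so the visibility-timeout redelivery described above no longer applies.
CELERY_RESULT_BACKEND = 'redis://localhost:6380'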

Upvotes: 1
