Reputation: 43
We're using Celery 4.2.1 and Redis with global soft and hard timeouts set for our tasks. All of our custom tasks are designed to stay under the limits, but every day the built-in backend_cleanup task ends up being forcibly killed by the timeouts.
I'd rather not have to raise our global timeout just to accommodate builtin Celery tasks. Is there a way to set the timeout of these builtin tasks directly?
I've had trouble finding any documentation on this or even anyone hitting the same problem.
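For reference, the global limits are configured along these lines (Celery 4.x lowercase setting names; the values below are placeholders, not our real numbers):
app.conf.task_soft_time_limit = 60   # soft limit: SoftTimeLimitExceeded is raised inside the task
app.conf.task_time_limit = 120       # hard limit: the worker child running the task is killed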
Relevant source from celery/app/builtins.py:
@connect_on_app_finalize
def add_backend_cleanup_task(app):
    """Task used to clean up expired results.

    If the configured backend requires periodic cleanup this task is also
    automatically configured to run every day at 4am (requires
    :program:`celery beat` to be running).
    """
    @app.task(name='celery.backend_cleanup', shared=False, lazy=False)
    def backend_cleanup():
        app.backend.cleanup()
    return backend_cleanup
Upvotes: 3
Views: 1296
Reputation: 26
According to the Celery docs https://docs.celeryq.dev/en/stable/userguide/configuration.html#std-setting-result_expires you can set the result_expires variable in your project settings (CELERY_RESULT_EXPIRES for Celery versions < 4.0):
result_expires
Default: Expire after 1 day.
Expected value: time (in seconds, or a timedelta object) for when after stored task tombstones will be deleted.
A built-in periodic task will delete the results after this time (celery.backend_cleanup), assuming that celery beat is enabled. The task runs daily at 4am.
A value of None or 0 means results will never expire (depending on backend specifications).
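For example, in the module where the Celery app is defined (the one-hour value below is only an illustration):
from datetime import timedelta

# Shorter result expiry so each cleanup run has less to delete;
# result_expires accepts seconds (int/float) or a timedelta.
app.conf.result_expires = timedelta(hours=1)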
Upvotes: 0
Reputation: 51
You may set the backend cleanup schedule directly in celery.py:
app.conf.beat_schedule = {
    'backend_cleanup': {
        'task': 'celery.backend_cleanup',
        'schedule': 600,  # 10 minutes
    },
}
And then run the celery beat process:
celery -A YOUR_APP_NAME beat -l info --detach
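If the concern is the time limits rather than the schedule, a beat entry's options dict is forwarded to apply_async, so you can also give just this task its own limits (the numbers below are placeholders):
app.conf.beat_schedule = {
    'backend_cleanup': {
        'task': 'celery.backend_cleanup',
        'schedule': 600,  # 10 minutes
        # forwarded to apply_async; overrides the global limits for this task only
        'options': {'soft_time_limit': 300, 'time_limit': 360},
    },
}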
Upvotes: 5