Reputation: 898
I am using Celery with supervisor running the workers and Redis as the broker, and I'm having an issue with a Celery worker apparently freezing up, making it unable to process any more tasks and causing its task queue in Redis to fill up to the point of causing some memory issues. I tried setting the expires option when I called the task, thinking that this would take advantage of Redis' support for key expiry:
some_task.apply_async(args=('foo',), expires=60)
but this didn't work, and when I inspected the corresponding list in the Redis CLI, it just kept expanding – perhaps unsurprisingly, because it sounds like list expiry is not built-in functionality in Redis. The Celery docs say that the expiration time corresponds to time after "publishing" the task, but I couldn't find any mention of what "publishing" actually means. I had assumed it referred to adding the task to the Redis list, so either that presumption is wrong or something else is going on that I don't understand (or both).
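For reference, this is roughly how I have been inspecting the queue, shown here as a redis-py sketch rather than the raw redis-cli commands (it assumes the default queue name 'celery' and a local Redis; adjust the connection details to match your broker URL):
import redis

r = redis.Redis(host='localhost', port=6379, db=0)

# Celery's default queue is a plain Redis LIST named 'celery'.
print(r.type('celery'))  # b'list'
print(r.llen('celery'))  # number of pending task messages keeps growing
print(r.ttl('celery'))   # -1: the list key itself has no expiry set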
Am I wrong about task expiry time? And if so, is there any way to cause the messages to expire within Redis?
Upvotes: 8
Views: 13614
Reputation: 4454
The context is more puzzling than the question: you were able to use redis-cli and inspect the Redis keys. In redis-cli, you can type ttl sexykey
and the reply tells you the remaining seconds before expiry (-1 if the key has no expiry set, -2 if it does not exist). That would have shown you whether Celery set the key to expire when it wrote it, and so answered that particular uncertainty on your part.
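If you prefer to do that check from Python rather than redis-cli, here is a minimal redis-py sketch (the host and key name are placeholders):
import redis

r = redis.Redis(host='172.17.0.2')

# TTL returns the remaining seconds before the key expires,
# -1 if the key exists but has no expiry, -2 if the key is missing.
print(r.ttl('sexykey'))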
First, let us be clear: there is a message broker, and there is a result backend. Celery supports only a very few message brokers but many result backends. The list of brokers is here. The list of supported backends is at page 10 (as of 2018-Mar-24) under the Transport and Backends section here. It is the result backend that I assume is filling up, because this is what I am seeing too.
Celery can use the same Redis instance as both the message broker and the result backend. Celery stores the result of every executed task as a Redis key, regardless of whether the task succeeds or not, and this Redis key has a default expiration of one day (86400 seconds). So if you have many function calls executed by Celery, your Redis in-memory cache fills up, because keys that only expire after 86400 seconds cannot keep up with the incoming stream of recorded task results.
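You can verify this yourself from Python; here is a minimal redis-py sketch (the host matches the snippet below, and the celery-task-meta-<task-id> key naming is the Redis result backend's convention):
import redis

r = redis.Redis(host='172.17.0.2')

# Every stored task result lives under a 'celery-task-meta-<task-id>' key.
for key in r.scan_iter('celery-task-meta-*'):
    # With default settings, each key's TTL starts at 86400 seconds.
    print(key, r.ttl(key))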
To shorten the key expiration to 60 seconds, here is the Python snippet:
from celery import Celery

# Use the same Redis instance as both broker and result backend
app = Celery('justdoit',
             broker='redis://172.17.0.2',
             backend='redis://172.17.0.2')

# Expire stored task results after 60 seconds instead of the default 86400
app.conf.result_expires = 60
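And if you never read the results at all, you can avoid writing them in the first place; a small sketch, assuming the app object from the snippet above (ignore_result is a standard Celery task option):
@app.task(ignore_result=True)
def fire_and_forget(x):
    # No result key is written to the backend for this task,
    # so nothing accumulates in Redis.
    print('processing', x)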
PS: I only started learning Celery a few hours ago, and I immediately recognized this very Redis filling-up scenario before it happened. I have been using Redis for a year, so I know some of its characteristics.
Upvotes: 17