Ben in CA

Reputation: 851

Ability to set TTL on Redis Queue using Bull?

I have an application using Bull for a queue. Is there a parameter that I can pass it to set a TTL (time to live) for each entry automatically when it's created?

const Queue = require('bull')
const webApiQueue = new Queue('webApi', { redis: REDIS_URL })

// Producer
const webApiProducer = (data) => {
  webApiQueue.add(data, { lifo: true })
}

If setting a key with Redis directly, you can use SETEX key_name 10000 key_data
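
For comparison, here's roughly what I'd do with a plain Redis client from Node (ioredis and the key name are just for illustration, not part of my actual setup):

const Redis = require('ioredis')
const redis = new Redis(REDIS_URL)

// Store a payload with a TTL of 24 hours (86400 seconds); Redis deletes the key afterwards.
const storeWithTtl = (key, data) =>
  redis.setex(key, 86400, JSON.stringify(data))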

But how can I implement this in Bull? It's just an API processing queue, and I want it to delete entries automatically after 24 hours.

I'm not seeing anything in the documentation: https://github.com/OptimalBits/bull#documentation

Upvotes: 1

Views: 3280

Answers (1)

rinogo

Reputation: 9163

From what I gather, it seems like explicitly setting a TTL (e.g. 24 hours) on the Redis keys is not the recommended way to solve this.

It seems like the canonical approach is to clear keys only when necessary (e.g. when Redis runs out of memory).

This Bull Issue pointed me in the right direction.

If you'd like to have Bull manage its memory a little more, ahem, reasonably, try specifying removeOnComplete and removeOnFail as discussed in the documentation (note that both default to false).
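
Building on the producer from the question, a rough sketch of that approach (the retention values are just examples, not recommendations):

const Queue = require('bull')
const webApiQueue = new Queue('webApi', { redis: REDIS_URL })

const webApiProducer = (data) => {
  webApiQueue.add(data, {
    lifo: true,
    removeOnComplete: true, // drop the job from Redis as soon as it completes successfully
    removeOnFail: 1000      // or a number: keep only the N most recent failed jobs for debugging
  })
}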

A totally different approach would be to solve the memory management issue with your Redis configuration by setting the maxmemory-policy to allkeys-lru as discussed in the Redis docs.
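
Roughly, that's either a redis.conf change or a runtime CONFIG SET (the 256mb cap below is only an illustrative value; allkeys-lru doesn't evict anything until maxmemory is reached):

# redis.conf
maxmemory 256mb
maxmemory-policy allkeys-lru

# or at runtime
redis-cli config set maxmemory-policy allkeys-lru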

If you're using AWS ElastiCache instead, Amazon has some documentation on these same techniques. ElastiCache uses a maxmemory-policy of volatile-lru by default, which will cause memory issues with Bull since Bull doesn't set TTLs. I'd recommend changing this to allkeys-lru.
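
Since ElastiCache's default parameter groups can't be edited, you'd typically create a custom parameter group, change maxmemory-policy there, and attach it to your cluster. Roughly, via the AWS CLI (the group name below is a placeholder):

aws elasticache modify-cache-parameter-group \
  --cache-parameter-group-name bull-redis-params \
  --parameter-name-values "ParameterName=maxmemory-policy,ParameterValue=allkeys-lru"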

For what it's worth, my guess is that the most performant solution is to modify maxmemory-policy in the Redis/ElastiCache configuration. That way, Redis itself is managing keys instead of Bull adding overhead for completed/failed job removal.

Upvotes: 5
