Derek

Reputation: 1125

Redis: Transaction with multiple keys

I'm using Spring Data Redis. In Redis, the basic data models are:

job: Hash that contains a job's data.

queue: List that contains job ids serving as a queue.

A new job is saved in the job hash and its id is pushed to the queue. We have multiple worker clients polling the queue, each consuming new jobs by popping an id and reading the details from the hash.
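For context, saving and queueing a job looks roughly like this (the key names `job:{id}` and `queue:jobs` are just placeholders, not my real keys):

```java
import java.util.Map;
import org.springframework.data.redis.core.StringRedisTemplate;

public class JobProducer {

    private final StringRedisTemplate redis;

    public JobProducer(StringRedisTemplate redis) {
        this.redis = redis;
    }

    public void enqueue(String jobId, Map<String, String> jobData) {
        // Store the job's fields in a hash keyed by the job id
        redis.opsForHash().putAll("job:" + jobId, jobData);
        // Push the id onto the queue list for workers to pick up
        redis.opsForList().leftPush("queue:jobs", jobId);
    }
}
```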

Now I'm trying to work out a new feature where certain workers can only consume certain jobs, based on some flags within the job data. The problem is that a worker only knows whether it can consume a job after reading its details, not at the moment it takes the id from the queue.

I originally thought I could put this sequence of operations into a transaction:

  1. Peek the queue.
  2. Read job details from hash and check if consumable.
  3. If yes, take its id from queue, otherwise do nothing.

However, this kind of transaction involves both the queue and the hash data. After reading about Redis's transaction support, I'm not sure if this is achievable. Please advise what approach I should take.
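For reference, the sequence I have in mind looks roughly like this as plain, non-transactional Spring Data Redis calls (placeholder key names again, and a hypothetical `flag` field); the gap between the peek and the remove is exactly where another worker could race in:

```java
import java.util.Map;
import org.springframework.data.redis.core.StringRedisTemplate;

public class SelectiveConsumer {

    private final StringRedisTemplate redis;

    public SelectiveConsumer(StringRedisTemplate redis) {
        this.redis = redis;
    }

    /** Returns the consumed job id, or null if the head job is not for this worker. */
    public String tryConsume(String requiredFlag) {
        // 1. Peek the head of the queue (the end a rightPop would take) without removing it
        String jobId = redis.opsForList().index("queue:jobs", -1);
        if (jobId == null) {
            return null; // queue is empty
        }
        // 2. Read the job details and check whether this worker may consume it
        Map<Object, Object> job = redis.opsForHash().entries("job:" + jobId);
        if (!requiredFlag.equals(job.get("flag"))) {
            return null; // not consumable by this worker, leave it in the queue
        }
        // 3. Take the id from the queue (LREM); another worker may have removed it already
        Long removed = redis.opsForList().remove("queue:jobs", 1, jobId);
        return removed != null && removed > 0 ? jobId : null;
    }
}
```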

Upvotes: 1

Views: 1872

Answers (2)

Joseph Simpson

Reputation: 4183

If you want to avoid the polling, peeking, and race conditions, you could consider the following:

  1. maintain your existing queue as the generic input queue and have a lightweight triage-type worker (one is probably enough, but you could have more for redundancy) pop items from this queue, using a blocking pop to avoid polling, and, based on your job allocation logic, push each item onto a separate queue per worker type

  2. create multiple workers, each of which makes blocking pops from its own worker-type queue

One advantage of breaking your queue into multiple queues is that you get more visibility in your monitoring: you can check the length of each queue and see which types of workers are running slowly.
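A rough sketch of the triage worker, assuming the job hash carries a `workerType` field that drives the routing (key and field names are placeholders); each per-type worker would then do the same blocking pop on its own `queue:jobs:{type}` list:

```java
import java.util.Map;
import java.util.concurrent.TimeUnit;
import org.springframework.data.redis.core.StringRedisTemplate;

public class TriageWorker {

    private final StringRedisTemplate redis;

    public TriageWorker(StringRedisTemplate redis) {
        this.redis = redis;
    }

    public void run() {
        while (!Thread.currentThread().isInterrupted()) {
            // Blocking pop (BRPOP) from the generic input queue, no busy polling
            String jobId = redis.opsForList().rightPop("queue:jobs", 30, TimeUnit.SECONDS);
            if (jobId == null) {
                continue; // timed out, loop and block again
            }
            // Read the job's flags and route the id to the matching worker-type queue
            Map<Object, Object> job = redis.opsForHash().entries("job:" + jobId);
            String workerType = String.valueOf(job.get("workerType"));
            redis.opsForList().leftPush("queue:jobs:" + workerType, jobId);
        }
    }
}
```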

It may not be suitable for production usage, but you may be interested in Disque, an in-memory distributed job queue, created by the same author as Redis (and based on Redis code).

Upvotes: 1

mp911de

Reputation: 18119

Redis transactions are slightly different from relational database transactions; they are better described as conditional batching. A transaction is a batch of commands that are QUEUED at the time they are issued. Once you EXEC the transaction, the commands are executed and their responses are returned in the response of the EXEC command.
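A minimal sketch of that batching behavior with Spring Data Redis (key names are placeholders): the commands issued between `multi()` and `exec()` are only queued, and their actual responses only come back as the list returned by `exec()`:

```java
import java.util.List;
import org.springframework.dao.DataAccessException;
import org.springframework.data.redis.core.RedisOperations;
import org.springframework.data.redis.core.SessionCallback;
import org.springframework.data.redis.core.StringRedisTemplate;

public class TransactionDemo {

    public List<Object> runBatch(StringRedisTemplate redis) {
        return redis.execute(new SessionCallback<List<Object>>() {
            @SuppressWarnings("unchecked")
            public List<Object> execute(RedisOperations operations) throws DataAccessException {
                operations.multi();
                // These commands are only QUEUED here; nothing has run yet,
                // so their immediate return values are not usable.
                operations.opsForValue().set("demo:key", "value");
                operations.opsForValue().get("demo:key");
                // EXEC runs the queued commands and returns their responses.
                return operations.exec();
            }
        });
    }
}
```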

From my point of view, there's no need for a transaction (yet). Regularly peeking the queue is idempotent, so nothing breaks if it happens multiple times. The same goes for reading the job details. You should just expect that the job details may have disappeared by the time you try to read them, since another node might have been faster and already handled the job. That's a typical race condition, and identifying these early is beneficial.

Now comes the crucial part: taking the job from the queue is usually a BLPOP/BRPOP to guarantee atomicity. You don't state what should happen with the job details once the job is done; I'd assume removing the hash. So BLPOP on the queue and DEL on the job hash would be candidates to put into a transaction, but this depends on your use case and constraints, in particular whether you need at-least-once or at-most-once behavior.
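A rough sketch of that consumption path (placeholder key names again); where exactly you delete the hash relative to processing is the at-least-once vs. at-most-once decision mentioned above:

```java
import java.util.Map;
import java.util.concurrent.TimeUnit;
import org.springframework.data.redis.core.StringRedisTemplate;

public class Worker {

    private final StringRedisTemplate redis;

    public Worker(StringRedisTemplate redis) {
        this.redis = redis;
    }

    public void run() {
        while (!Thread.currentThread().isInterrupted()) {
            // BRPOP: atomically take the next job id, blocking instead of polling
            String jobId = redis.opsForList().rightPop("queue:jobs", 30, TimeUnit.SECONDS);
            if (jobId == null) {
                continue;
            }
            Map<Object, Object> job = redis.opsForHash().entries("job:" + jobId);
            process(job);
            // Clean up the job hash once the job has been handled
            redis.delete("job:" + jobId);
        }
    }

    private void process(Map<Object, Object> job) {
        // job handling logic goes here
    }
}
```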

Upvotes: 1
