Lucas Mendes Sales

Reputation: 1

RabbitMQ and Kubernetes - Queue exclusive to a pod

I'm trying to solve a problem whose root cause is that I need to split a big message into smaller messages before sending it to RabbitMQ (I can't change that).

But doing this creates another problem: the consumer of these messages runs in a Kubernetes deployment with 2 replicas of the pod, and the messages created by the split CAN'T be processed by both of them (that would create a huge problem).

So, the question is: how can I make sure that only one of the pods will be consuming these messages?

My first idea was to create a queue just for this case and put an env variable with its routing key in only one of the pods. But that seems like a bit of an anti-pattern, so I'm trying to avoid this approach.
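For reference, the idea described above could be sketched like this. All names here are hypothetical (the env variable SPLIT_QUEUE_ROUTING_KEY, the queue name split-messages); a real setup would declare and bind the queue and consume it with a RabbitMQ client such as pika:

```python
def split_consumer_binding(env: dict):
    """Return the (queue, routing_key) this pod should bind, or None to skip.

    Hypothetical names: only the replica whose Deployment spec sets
    SPLIT_QUEUE_ROUTING_KEY would consume the dedicated queue.
    """
    routing_key = env.get("SPLIT_QUEUE_ROUTING_KEY")
    if not routing_key:
        return None  # replicas without the variable never touch the queue
    return ("split-messages", routing_key)
```

In this sketch, only the one replica whose pod spec sets the variable would go on to call basic_consume on that queue; the other replica gets None and consumes nothing from it.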

Upvotes: 0

Views: 692

Answers (1)

JGK

Reputation: 4168

Not a direct answer to your question, but I recommend rethinking your solution.

For performance and availability reasons, we restricted the maximum message size in our message-queuing infrastructure to 256 KB per message. For larger messages, the user data is transmitted out of band: only a small message with download information is sent, telling the consumer that the payload is available for download somewhere else (sometimes called the claim-check pattern). We decided to use MinIO, which is S3-compatible, as storage for the out-of-band user data. The workflow can be described in pseudocode as follows.

Producer

if size(data) > 256 KB
  s3handle = s3connect(username, password, connection_url)
  s3handle.s3upload(bucketname, filename, data)
  download_url = s3handle.s3getObjectUrl(bucketname, filename, duration)
  message.send(queue, download_url)
else
  message.send(queue, data)
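A minimal runnable sketch of the producer-side decision above, with the storage and publish calls factored out. MAX_MESSAGE_SIZE matches the 256 KB limit mentioned in the answer; upload_to_s3 is a hypothetical stand-in for a real MinIO/S3 client call that returns a presigned download URL:

```python
MAX_MESSAGE_SIZE = 256 * 1024  # 256 KB, the limit used in our infrastructure

def build_payload(data: bytes, upload_to_s3) -> dict:
    """Decide whether to send data inline or as a claim-check URL."""
    if len(data) > MAX_MESSAGE_SIZE:
        # Large payload: store it out of band and send only the URL.
        url = upload_to_s3(data)  # e.g. a presigned MinIO/S3 object URL
        return {"type": "url", "body": url}
    # Small payload: ship it inside the message itself.
    return {"type": "inline", "body": data}
```

The returned dict would then be serialized and published to the queue with your RabbitMQ client (e.g. pika's basic_publish).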

Consumer

data = message.read(queue)
if (data of type url)
  contents = getUrl(data)
else
  contents = data
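And the consumer-side counterpart, matching the payload format sketched for the producer. fetch_url is a hypothetical stand-in for an HTTP GET against the presigned download URL:

```python
def resolve_payload(message: dict, fetch_url) -> bytes:
    """Return the actual contents, downloading out-of-band data if needed."""
    if message["type"] == "url":
        return fetch_url(message["body"])  # payload stored in MinIO/S3
    return message["body"]  # payload was small enough to travel inline
```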

We've been using this for many years with no problems.

Upvotes: 1
