Reputation: 256
I have an exchange of type topic that currently routes messages only to a queue named payments. At some point in the future, I will add another queue, payment_analyze, to analyze all old and new messages that have been enqueued.
Durable exchanges and queues survive RabbitMQ restarts, and persistent messages get written to disk, but when binding a new queue to an existing durable exchange, old messages are not routed to it (only new ones are). From my understanding, this is the intended behavior, since exchanges do not store messages and only act as a "proxy".
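For reference, here is roughly what my setup looks like as a pika sketch (the exchange name, routing keys and message body are placeholders, not my real values):

```python
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()

# Durable topic exchange and durable queue, so both survive a broker restart.
channel.exchange_declare(exchange='payments_exchange', exchange_type='topic', durable=True)
channel.queue_declare(queue='payments', durable=True)
channel.queue_bind(queue='payments', exchange='payments_exchange', routing_key='payment.#')

# Persistent message (delivery_mode=2), written to disk while it sits in the queue.
channel.basic_publish(
    exchange='payments_exchange',
    routing_key='payment.created',
    body=b'{"order_id": 42}',
    properties=pika.BasicProperties(delivery_mode=2),
)
```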
How can I achieve this, i.e. make the old messages available to the new queue as well?
Possible Solution
Create a queue named parking and route every enqueued message to it. Whenever a new queue is added, consume the messages from parking without acknowledging them, to keep the new queue "semi" up to date.
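Something along these lines (pika again; the parking queue, routing key and target queue are just an illustration of the idea):

```python
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()

# "parking" is bound with the '#' wildcard so it receives a copy of every message.
channel.queue_declare(queue='parking', durable=True)
channel.queue_bind(queue='parking', exchange='payments_exchange', routing_key='#')

channel.queue_declare(queue='payment_analyze', durable=True)

def on_parked(ch, method, properties, body):
    # Copy the message into the newly added queue via the default exchange.
    # Deliberately no basic_ack: the message stays unacknowledged in 'parking'
    # and is requeued once this channel closes.
    ch.basic_publish(exchange='', routing_key='payment_analyze', body=body, properties=properties)

channel.basic_consume(queue='parking', on_message_callback=on_parked, auto_ack=False)
channel.start_consuming()
```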
Upvotes: 4
Views: 989
Reputation: 4190
Even though you configured persistent messages on the payments queue, that just means messages will survive a broker restart; once a message has been consumed and acknowledged, it is removed.
If you know you're going to need the payment_analyze queue at some point in the future, is it viable to just create this queue/binding upfront and route messages to both payment_analyze and payments? Messages on the payment_analyze queue will bank up until you're ready to start consuming them. Note: If you're producing a large number of messages this approach might result in storage issues...
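For example, a minimal pika sketch (queue and exchange names are assumed from your question):

```python
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()

channel.exchange_declare(exchange='payments_exchange', exchange_type='topic', durable=True)

# Declare and bind both queues up front; payment_analyze simply accumulates
# messages until you attach a consumer to it.
for queue in ('payments', 'payment_analyze'):
    channel.queue_declare(queue=queue, durable=True)
    channel.queue_bind(queue=queue, exchange='payments_exchange', routing_key='payment.#')
```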
As an alternative, you could write the messages to BLOB storage (or some other data store) as part of your payments queue consumer (or a different queue/consumer altogether), and then, when you're ready to introduce the payment_analyze queue, write a script that reads all the old messages from BLOB storage and sends them to the RabbitMQ exchange. With 'topic' exchanges (see here) you can probably be clever with wildcards and routing keys in your queue bindings to ensure that both old messages (from BLOB storage) and new messages are routed to the payment_analyze queue, but only new messages are routed to the payments queue (so that your payments queue consumer is not reprocessing old messages).
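One way the wildcard trick could look (the routing keys and the BLOB-storage helper below are hypothetical):

```python
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()

# Live traffic is published with 'payment.live', replays with 'payment.replay'.
# 'payments' only matches live traffic; 'payment_analyze' matches both.
channel.queue_bind(queue='payments', exchange='payments_exchange', routing_key='payment.live')
channel.queue_bind(queue='payment_analyze', exchange='payments_exchange', routing_key='payment.*')

# Replay script: pull the old messages back out of BLOB storage and publish
# them with the replay key so only payment_analyze receives them.
for body in read_old_messages_from_blob_storage():  # hypothetical helper
    channel.basic_publish(exchange='payments_exchange', routing_key='payment.replay', body=body)
```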
Another option (assuming you're not overly invested in RabbitMQ) could be to consider Apache Kafka instead, which handles this scenario quite nicely, since messages aren't automatically removed from a partition once they've been processed by a subscriber.
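For instance, with the kafka-python client a brand-new consumer group can re-read everything still retained on a topic (topic, group and broker address below are assumptions):

```python
from kafka import KafkaConsumer

# A new consumer group starting at 'earliest' re-reads all retained messages,
# which is essentially the payment_analyze scenario.
consumer = KafkaConsumer(
    'payments',
    bootstrap_servers='localhost:9092',
    group_id='payment_analyze',
    auto_offset_reset='earliest',
)
for record in consumer:
    print(record.value)
```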
Anyways, just a few options to consider...
Upvotes: 1