Reputation: 1472
Problem: We have more than one instance of the consumer app running. If two messages related to the same entity are sent over a RabbitMQ queue consecutively, the second event already contains all the changes from the first, so we need not process the first one at all. How do we do that?
Thoughts:
Sequence-number the events and add a column to the entity table to hold the latest processed sequence. When an event is processed, its seq is checked against the table's seq num: if it is greater, we persist the entity changes; if not, we drop the event (a rough sketch of what I mean is at the end of this post).
Is there any way to ask RabbitMQ whether a newer event for the same entity has already been sent or acknowledged? If so, we could check its seq num and validate before persisting. There is still a risk of overriding the newer changes.
Not sure if there is a better way. Kindly provide your thoughts.
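For reference, here is roughly what I mean by the seq-num check, as a minimal sketch only; sqlite3 and the entity/seq_num/payload names are just for illustration:

import sqlite3

conn = sqlite3.connect("app.db")

def apply_event(entity_id, event_seq, new_payload):
    # Persist only if this event is newer than what the table already holds.
    cur = conn.execute(
        "UPDATE entity SET payload = ?, seq_num = ? WHERE id = ? AND seq_num < ?",
        (new_payload, event_seq, entity_id, event_seq),
    )
    conn.commit()
    # rowcount == 0 means an equal or newer event was already applied, so this one is dropped.
    return cur.rowcount > 0

The idea is that the conditional UPDATE makes the check-and-write atomic, so two consumer instances cannot interleave between the read and the write.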
Upvotes: 0
Views: 811
Reputation: 3070
RabbitMQ queues are ordered by publish order.
If you use a direct exchange with a single consumer, there should not be a problem. But if you use more consumers for scaling, the order of consumption is not guaranteed.
So you have to handle it manually.
I assume your events are raised when something changes in your data, so you can use data versioning.
For example:
First event: SomethingHappenedEvent { entityId: y, dataVersion: x, otherPayloads... }
Second event: SomethingHappenedEvent { entityId: y, dataVersion: x+1, otherPayloads... }
When one consumer gets the first event and another gets the second, the first consumer searches for the data at version x but does not find it, so it can discard the event: the data has already been updated because the second event was applied first.
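As a rough illustration of that guard in a consumer callback, assuming pika and an in-memory stand-in for the entity store (the queue name and field names are made up, and dataVersion is taken to be the version the event moves the entity to):

import json
import pika

entities = {}  # in-memory stand-in for the entity store: entityId -> current dataVersion

def on_message(ch, method, properties, body):
    event = json.loads(body)
    stored_version = entities.get(event["entityId"], 0)
    if stored_version >= event["dataVersion"]:
        # A later event has already been applied; this one is stale, so ack and drop it.
        ch.basic_ack(delivery_tag=method.delivery_tag)
        return
    entities[event["entityId"]] = event["dataVersion"]
    # ... apply otherPayloads to your real store here ...
    ch.basic_ack(delivery_tag=method.delivery_tag)

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()
channel.basic_consume(queue="entity-events", on_message_callback=on_message)
channel.start_consuming()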
Another solution is to use a distributed lock with something like Redis. You can lock consumers while an event for a given entity is being consumed, so that only one consumer works on a specific entity at a time.
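A minimal sketch of such a per-entity lock with redis-py (the key name and timeout are arbitrary choices, not something your setup requires):

import time
import redis

r = redis.Redis()

def process_with_lock(entity_id, handle_event):
    lock_key = f"lock:entity:{entity_id}"
    # SET NX EX: only one consumer can hold the lock; it expires in case the holder crashes.
    while not r.set(lock_key, "1", nx=True, ex=30):
        time.sleep(0.1)
    try:
        handle_event()
    finally:
        # In production you would store a unique token and only delete if it still matches,
        # so a consumer cannot release a lock it no longer owns.
        r.delete(lock_key)

redis-py also ships a higher-level helper (r.lock(name)) that wraps this pattern if you prefer not to manage the keys yourself.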
I don't know whether these are usable solutions for your case, but they might give you an idea.
Upvotes: 2