Reputation: 40906
I have an interesting conundrum: I use the Redis Pub/Sub pattern to emit events within my network of NodeJS microservices. I moved away from RabbitMQ because I needed the messages to be multicast, so that multiple microservices could receive and handle the same event. So if I emit a LOGOUT event, both the UserService and the WebSocketService should hear it.
This worked fine until my client deployed to an environment in which multiple instances of each microservice run. Now, given 3 instances of the UserService, all of them connect to Redis and all of them receive and handle the LOGOUT event, which is bad.
So on one hand I need the distinct microservices to hear the event, but on the other I need to prevent the duplicate copies of the same microservice from all handling the event. Rock, meet hard place.
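For concreteness, every instance currently runs something like this (ioredis shown for illustration, any client behaves the same; handleLogout stands in for the real handler):

```js
const Redis = require("ioredis");
const sub = new Redis();

sub.subscribe("events");
sub.on("message", (channel, message) => {
  const event = JSON.parse(message);
  // With 3 instances of UserService, this fires 3 times per LOGOUT event
  if (event.type === "LOGOUT") handleLogout(event);
});
```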
My best thinking so far is a bit hacky, but something like:
I don't love this solution because the events pile up in an ever-growing list in memory rather than living in transient messages. I'd also have to start adding code to track which events have already been checked; otherwise, every cycle, the whole list would have to be parsed again by all instances of all services.
Ideas welcome.
Upvotes: 5
Views: 3470
Reputation: 79
Hey, I faced the same issue after running servers on multiple instances.
I solved it by creating a write-lock. Whenever an instance receives a message on the subscribed channel, it tries to atomically increment a counter key on the document for that event. Only one instance's update actually modifies the document (in my case I checked for nModified: 1 in the update result), so by guarding the handler with that condition, exactly one instance enters it and performs the task, even though every instance received the message.
You can do the same by creating your own write-lock.
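For example, here is a minimal sketch of the same claim-the-event idea, swapping the counter update for an atomic Redis SET NX (it assumes every published event carries a unique id; the key names and handleEvent are made up):

```js
const Redis = require("ioredis");
const redis = new Redis();
const sub = new Redis(); // a subscribed connection can't issue other commands

function handleEvent(event) {
  console.log("handled", event); // stand-in for the real work
}

sub.subscribe("events");
sub.on("message", async (channel, message) => {
  const event = JSON.parse(message);
  // SET ... NX succeeds for exactly one instance per event id;
  // the TTL keeps stale locks from accumulating.
  const claimed = await redis.set(`lock:${event.id}`, "1", "EX", 60, "NX");
  if (claimed === "OK") {
    handleEvent(event); // only the claiming instance gets here
  }
});
```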
Hail Coding
Upvotes: 0
Reputation: 15390
Redis Streams seem to be a good fit for your use case. Did you check them out?
You can have multiple consumer groups. Like pub/sub, each consumer group receives every message from the stream (here the stream plays the role of the pub/sub channel/topic). However, when there are multiple nodes within a single consumer group, only one node will process a given event, not all of them. A node can then acknowledge the message it has processed.
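A minimal sketch, assuming the ioredis client and made-up names ("events", "user-service"); in practice each service uses its own group name and each instance its own consumer name:

```js
const Redis = require("ioredis");
const redis = new Redis();

async function main() {
  // One consumer group per service, created once. MKSTREAM creates the
  // stream if it doesn't exist; the catch ignores BUSYGROUP on re-creation.
  await redis
    .xgroup("CREATE", "events", "user-service", "$", "MKSTREAM")
    .catch(() => {});

  // Producer side: append an event to the stream.
  await redis.xadd("events", "*", "type", "LOGOUT", "userId", "42");

  // Consumer side: each instance passes its own consumer name.
  // Within a group, Redis hands each entry to exactly one consumer.
  const batch = await redis.xreadgroup(
    "GROUP", "user-service", "instance-1",
    "COUNT", 10, "BLOCK", 5000,
    "STREAMS", "events", ">"
  );

  for (const [, entries] of batch || []) {
    for (const [id, fields] of entries) {
      console.log("handling", id, fields); // stand-in for the real work
      await redis.xack("events", "user-service", id); // mark as processed
    }
  }
}

main().catch(console.error);
```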
Upvotes: 7
Reputation: 15807
For the scenario you described I would use both patterns: an events queue (or more specifically, one events queue per cluster of consumers) guarded with Redlock, plus the Pub/Sub pattern to alert all the consumers that a new event is ready to be consumed. Compared to your proposed solution this should be a bit more reactive, because there is no polling: access to the queue(s) is event-driven.
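A minimal sketch of that combination, assuming ioredis plus the redlock npm package (v4 API) and made-up queue/channel names:

```js
const Redis = require("ioredis");
const Redlock = require("redlock");

const redis = new Redis();
const sub = new Redis();
const redlock = new Redlock([redis], { retryCount: 0 });

function handleEvent(event) {
  console.log("handled", event); // stand-in for the real work
}

// Producer: enqueue the event, then wake all instances via Pub/Sub.
async function emit(event) {
  await redis.rpush("queue:user-service", JSON.stringify(event));
  await redis.publish("wakeup:user-service", "1");
}

// Consumer: every instance wakes up, but only the lock holder drains the queue.
sub.subscribe("wakeup:user-service");
sub.on("message", async () => {
  let lock;
  try {
    lock = await redlock.lock("lock:queue:user-service", 5000);
  } catch (err) {
    return; // another instance already holds the lock and is draining
  }
  try {
    let raw;
    while ((raw = await redis.lpop("queue:user-service")) !== null) {
      handleEvent(JSON.parse(raw));
    }
  } finally {
    await lock.unlock();
  }
});
```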
Upvotes: 1
Reputation: 1897
Upon being notified (via Pub/Sub), consume the list of events on Redis using LPOP, and only act if there was actually something on that list.
This will both clean up the event list and guarantee that only one consumer actually executes the action, since LPOP is atomic: two instances can never pop the same element.
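A minimal sketch, assuming ioredis and made-up list/channel names (the producer would RPUSH the event and then publish a notification):

```js
const Redis = require("ioredis");
const redis = new Redis();
const sub = new Redis();

function handleEvent(event) {
  console.log("handled", event); // stand-in for the real work
}

sub.subscribe("events:new");
sub.on("message", async () => {
  // LPOP is atomic: even though every instance wakes up,
  // each list element is handed to exactly one of them.
  const raw = await redis.lpop("events:user-service");
  if (raw) handleEvent(JSON.parse(raw));
});
```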
Upvotes: 0