Reputation: 87
I have a requirement where I want to publish the same Kafka message to 2 replicas of Kubernetes pods so that I can keep both replicas in sync. As our nodes get repaved frequently, I don't want to lose any data. On the other hand, the pods need to be highly available and scalable. Any help on the above would be appreciated.
Upvotes: 5
Views: 2438
Reputation: 39427
You should be able to achieve resilience by having just one of the pods consume each message. To do this, set up your Kafka library so that all your consumers are in the same consumer group; Kafka then delivers each message to exactly one member of the group, and if a pod dies during a node repave, its partitions are rebalanced to the surviving pods.
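A minimal sketch with the plain Java client, assuming a broker at `kafka:9092` and a topic named `my-topic` (both placeholders). Every pod in the Deployment runs this with the same `group.id`:

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class PodConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092"); // assumed broker address
        // All pods share this group id, so each message goes to exactly one pod.
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "my-app");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("my-topic")); // assumed topic name
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("partition=%d offset=%d value=%s%n",
                            record.partition(), record.offset(), record.value());
                }
            }
        }
    }
}
```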
If you really want each message to be consumed twice or more, you can create 2 or more consumer groups and assign a number of pods to each group. Each consumer group will consume the message exactly once.
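One way to sketch that: run one Deployment per group and set the group id from an environment variable, so each group receives its own copy of every message. `KAFKA_GROUP_ID` is a hypothetical variable name here:

```java
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerConfig;

public class GroupConfig {
    // Builds consumer properties whose group id comes from the environment.
    // Deploy one Kubernetes Deployment per group (e.g. KAFKA_GROUP_ID=group-a
    // and KAFKA_GROUP_ID=group-b); Kafka delivers every message once per
    // group, i.e. twice overall with two groups.
    static Properties consumerProps() {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092"); // assumed broker address
        props.put(ConsumerConfig.GROUP_ID_CONFIG,
                System.getenv().getOrDefault("KAFKA_GROUP_ID", "group-a")); // hypothetical env var
        return props;
    }
}
```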
For Spring, this link could be helpful; look at the consumer props.
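For illustration, a Spring Kafka sketch of the same idea; `app.kafka.group-id` is a hypothetical property name, and the topic is again a placeholder:

```java
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Component;

@Component
public class MessageListener {
    // groupId plays the same role as group.id in the plain client; Spring
    // resolves the placeholder from your application properties, so each
    // Deployment can point it at a different group.
    @KafkaListener(topics = "my-topic", groupId = "${app.kafka.group-id}")
    public void onMessage(String message) {
        System.out.println("received: " + message);
    }
}
```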
Upvotes: 2