Reputation: 795
I have 3 consumers subscribed to a Kafka topic. A producer publishes 1 message to a topic.
How can I make sure that the message is replicated internally in Kafka and is then consumed by all 3 consumers?
One way would be to not commit the message, but then messages will keep on piling up in the topic.
Upvotes: 1
Views: 3295
Reputation: 495
You should put the 3 consumers in 3 different consumer groups (each with its own group.id).
Consumers label themselves with a consumer group name, and each record published to a topic is delivered to one consumer instance within each subscribing consumer group. Consumer instances can be in separate processes or on separate machines.
If all the consumer instances have the same consumer group, then the records will effectively be load balanced over the consumer instances.
If all the consumer instances have different consumer groups, then each record will be broadcast to all the consumer processes.
from https://kafka.apache.org/documentation/
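The two delivery modes described above can be sketched with a small, self-contained simulation (plain Python, no Kafka client; round-robin stands in for Kafka's partition assignment within a group):

```python
from collections import defaultdict
from itertools import cycle

def deliver(records, consumers):
    """Simulate Kafka delivery semantics: each record is delivered to
    exactly one consumer instance per subscribing consumer group.
    consumers is a list of (consumer_name, group_id) pairs."""
    groups = defaultdict(list)
    for name, group in consumers:
        groups[group].append(name)
    # Round-robin within each group approximates partition assignment.
    pickers = {g: cycle(members) for g, members in groups.items()}
    received = defaultdict(list)
    for record in records:
        for picker in pickers.values():
            received[next(picker)].append(record)
    return dict(received)

# Same group: records are load balanced over the three instances.
same = deliver(["m1", "m2", "m3"], [("c1", "g"), ("c2", "g"), ("c3", "g")])
# Different groups: every record is broadcast to every consumer.
diff = deliver(["m1", "m2", "m3"], [("c1", "g1"), ("c2", "g2"), ("c3", "g3")])
```

With all three consumers in one group, each message is consumed once in total; with three distinct groups, each consumer sees all three messages.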
Upvotes: 1
Reputation: 3981
Kafka does not duplicate messages for delivery: a message is written to a topic/partition only once. (Broker-side replication for fault tolerance is a separate concern, controlled by the topic's replication factor, and is transparent to consumers.)
But Kafka uses the concept of consumer groups to distinguish between different groups of consumers and to decide how they should receive the messages. In your case, you have to assign a different consumer group ID to each of these consumers. Once you do that, every consumer will receive every message, in parallel.
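A minimal sketch of this setup, not tied to any particular client library (the broker address and topic name are assumptions for illustration):

```python
# Build one configuration per consumer, each with its own group id,
# so every consumer receives a copy of every message.
TOPIC = "my-topic"  # assumed topic name
configs = [
    {
        "bootstrap_servers": "localhost:9092",  # assumed broker address
        "group_id": f"consumer-group-{i}",      # distinct per consumer
        "auto_offset_reset": "earliest",        # start from the beginning
    }
    for i in range(3)
]
```

With a client such as kafka-python you would then create each consumer from its config, e.g. `KafkaConsumer(TOPIC, **cfg)`, and all three receive every published message.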
Also, messages in Kafka are not removed when a consumer consumes them. They are stored in the topic/partition until they hit the retention limit, which can be based either on time (e.g. keep the messages for one week) or on size (e.g. keep up to 100GB per partition).
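As an example, topic-level retention can be set with Kafka's kafka-configs.sh tool; the topic name and limits below are illustrative values only:

```shell
# Keep records for one week (604800000 ms) or until the partition log
# reaches ~100 GB, whichever limit is hit first; "my-topic" is assumed.
kafka-configs.sh --bootstrap-server localhost:9092 \
  --entity-type topics --entity-name my-topic --alter \
  --add-config retention.ms=604800000,retention.bytes=107374182400
```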
Upvotes: 4
Reputation: 336
The only thing you have to do is assign a different group.id to each consumer.
Upvotes: 0