Reputation: 1
creating topic with 3 partitions:
$ kafka-topics.sh --topic fifth_topic --create --partitions 3 --replication-factor 1 --bootstrap-server=localhost:9092
producing data into this topic
$ kafka-console-producer.sh --bootstrap-server imeserver:9092 --topic fifth_topic
consuming with three consumers within the same consumer group, from the same local host:
$ kafka-console-consumer.sh --bootstrap-server imeserver:9092 --topic fifth_topic --group consumer_grp2 --from-beginning
$ kafka-console-consumer.sh --bootstrap-server imeserver:9092 --topic fifth_topic --group consumer_grp2 --from-beginning
$ kafka-console-consumer.sh --bootstrap-server imeserver:9092 --topic fifth_topic --group consumer_grp2 --from-beginning
describing the consumer group:
[ime@IMESERVER ~]$ kafka-consumer-groups.sh --bootstrap-server localhost:9092 --describe --group 'consumer_grp2'
GROUP          TOPIC        PARTITION  CURRENT-OFFSET  LOG-END-OFFSET  LAG  CONSUMER-ID                                            HOST        CLIENT-ID
consumer_grp2  fifth_topic  2          0               0               0    console-consumer-b6abaa2e-bf84-4919-8606-183dda964c17  /127.0.0.1  console-consumer
consumer_grp2  fifth_topic  0          0               0               0    console-consumer-2077d1c1-5a79-4b02-93b5-fd7c22d584e4  /127.0.0.1  console-consumer
consumer_grp2  fifth_topic  1          40              40              0    console-consumer-8df7c698-315a-463c-b0dd-13fa6932011f  /127.0.0.1  console-consumer
As you can see, all messages are being pushed to only one partition; there is no distribution to the other partitions.
I was expecting writes to go to multiple partitions, not just one.
Upvotes: 0
Views: 647
Reputation: 191904
Your consumers are doing the appropriate thing; they can't be assigned overlapping partitions in the same group.
It's unclear what data you're producing, but since you're not using the `parse.key`
property, the keys are null, so data within a single producer request is spread between partitions (round-robin in older clients). If you're only sending one event at a time on the CLI, however (stopping the command rather than entering multiple lines at once), it's possible the producer "randomly" picks partition 2 each time; the choice doesn't need to be truly random, since all partitions are equal in terms of where requests can be sent, so you can end up with the same partition every run.
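For contrast, when keys are present, Kafka's default partitioner hashes the key (with murmur2) modulo the partition count, so a given key always lands on the same partition. A minimal sketch of that idea, using CRC32 as an illustrative stand-in hash (assumption: this is not Kafka's actual murmur2 implementation, just the same hash-then-modulo shape):

```python
import zlib

NUM_PARTITIONS = 3  # matches the fifth_topic example above

def partition_for(key: bytes, num_partitions: int = NUM_PARTITIONS) -> int:
    """Map a record key onto a partition.
    Kafka's DefaultPartitioner uses murmur2; CRC32 here is a
    stand-in with the same structure: hash(key) % num_partitions."""
    return zlib.crc32(key) % num_partitions

# The same key always maps to the same partition...
assert partition_for(b"user-42") == partition_for(b"user-42")

# ...while many distinct keys spread across the partitions.
partitions_used = {partition_for(f"user-{i}".encode()) for i in range(100)}
print(sorted(partitions_used))
```

This is why enabling `parse.key=true` (plus `key.separator`) on the console producer and sending records with varied keys would show distribution across partitions.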
Alternatively: Kafka sends data in batches rather than one event at a time, and it's possible the whole batch was assigned to the same partition.
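That batch behavior can be sketched as follows: newer clients (2.4+) use a "sticky" strategy for null-key records, pinning every record in the current batch to one partition and only switching when a new batch starts. A toy simulation of the idea (an assumption-laden model, not the real client code):

```python
import random

class StickyPartitioner:
    """Toy model of Kafka's null-key sticky strategy: all records in
    the current batch go to a single partition; a different partition
    is chosen only when the batch is flushed."""

    def __init__(self, num_partitions: int):
        self.num_partitions = num_partitions
        self.current = random.randrange(num_partitions)

    def partition(self) -> int:
        # stick to the same partition for every record in this batch
        return self.current

    def on_new_batch(self) -> None:
        # pick a different partition for the next batch
        others = [p for p in range(self.num_partitions) if p != self.current]
        self.current = random.choice(others)

p = StickyPartitioner(3)
batch1 = [p.partition() for _ in range(5)]   # all 5 records: one partition
p.on_new_batch()
batch2 = [p.partition() for _ in range(5)]   # next batch: a different one
print(set(batch1), set(batch2))
```

So a handful of console-producer lines entered quickly can easily end up in one batch, and therefore in one partition, which matches the `--describe` output above.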
Upvotes: 2