Reputation: 2036
I am creating a Kafka topic as follows:
kafka-topics --create --zookeeper xx.xxx.xx:2181 --replication-factor 2 --partitions 200 --topic test6 --config retention.ms=900000
Then I produce messages in Go using the following library:
"gopkg.in/confluentinc/confluent-kafka-go.v1/kafka"
The producing code looks like this:
for _, message := range bigslice {
    topic := "test6"
    p.Produce(&kafka.Message{
        TopicPartition: kafka.TopicPartition{Topic: &topic},
        Value:          []byte(message),
    }, nil)
}
The problem is that I've sent more than 200K messages, but they all land in partition 0.
What could be wrong in this situation?
Upvotes: 2
Views: 1388
Reputation: 3
You provide no key when producing, so every message goes to the same partition. I suggest you read at least this: https://medium.com/event-driven-utopia/understanding-kafka-topic-partitions-ae40f80552e8
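To illustrate why keyed messages are spread deterministically across partitions, here is a minimal, self-contained sketch of hash-based key partitioning. It assumes a CRC32-style hash modulo the partition count (librdkafka's default keyed partitioner is hash-based; the exact hash and details here are an assumption, not the library's actual implementation):

```go
package main

import (
	"fmt"
	"hash/crc32"
)

// partitionFor sketches hash-based key partitioning: the same key
// always maps to the same partition, and different keys spread out
// across the available partitions. (Assumption: CRC32 modulo the
// partition count, for illustration only.)
func partitionFor(key string, numPartitions int) int32 {
	return int32(crc32.ChecksumIEEE([]byte(key)) % uint32(numPartitions))
}

func main() {
	// With 200 partitions (as in the question's topic), repeated keys
	// land on the same partition every time.
	for _, key := range []string{"user-1", "user-2", "user-1"} {
		fmt.Printf("key %s -> partition %d\n", key, partitionFor(key, 200))
	}
}
```

Running this shows `user-1` mapped to the same partition both times, which is the behavior the answer describes: identical keys, identical partition.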
Upvotes: 0
Reputation: 39790
Messages with the same key are added to the same partition. If that is not the case here, try including Partition: kafka.PartitionAny:
for _, message := range bigslice {
    topic := "test6"
    p.Produce(&kafka.Message{
        TopicPartition: kafka.TopicPartition{Topic: &topic, Partition: kafka.PartitionAny},
        Value:          []byte(message),
    }, nil)
}
Upvotes: 2