To explain with an example: I am setting spring.kafka.consumer.max-poll-records=1000, but I want the listener to process only 100 records at a time. That is, break the 1000 polled records into 10 sub-batches of 100 each and pass them to the listener one after another.
@KafkaListener(topics = "abcd-topic")
public void processRecords(List<ConsumerRecord<String, Word>> consumerRecords) {
    // process one sub-batch of records
}
In the above, consumerRecords should contain only 100 records, not 1000.
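For completeness, the listener only receives a List of records when batch listening is enabled; this is the configuration I am assuming in application.properties:

spring.kafka.listener.type=batch
spring.kafka.consumer.max-poll-records=1000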
I know that I could instead set max-poll-records to 100, but in my case network latency and broker connectivity are very poor, and I want to avoid the extra 10 network round trips to the broker. At the same time, I want to avoid processing 1000 records in one go, so that I can handle exceptions and failures more flexibly.
I am more interested in knowing whether it is possible to break the polled records into sub-batches.
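The only workaround I can think of is to split the list manually inside the listener, roughly as in the sketch below (subBatchSize and handleSubBatch are hypothetical names I made up), but then acknowledgement and the container's error handling still apply to the whole 1000-record batch, which is what I am trying to avoid:

import java.util.List;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.springframework.kafka.annotation.KafkaListener;

@KafkaListener(topics = "abcd-topic")
public void processRecords(List<ConsumerRecord<String, Word>> consumerRecords) {
    int subBatchSize = 100; // hypothetical sub-batch size
    for (int start = 0; start < consumerRecords.size(); start += subBatchSize) {
        int end = Math.min(start + subBatchSize, consumerRecords.size());
        // subList returns a view of the polled batch; copy it if it must outlive this iteration
        handleSubBatch(consumerRecords.subList(start, end)); // hypothetical helper
    }
}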