Swapnil

Reputation: 864

Springboot Kafka - Consumer idempotence

I have a Spring Boot application which listens to Kafka. To avoid duplicate processing, I am trying to do manual commits. For this I referred to Commit Asynchronously a message just after reading from topic. But I am stuck on how to achieve consumer idempotence so that records do not get processed twice.

Upvotes: 1

Views: 2792

Answers (1)

Gary Russell

Reputation: 174554

There is no such thing as an idempotent (exactly once) consumer with Kafka.

Kafka does provide exactly once semantics for

kafkaRead -> process -> kafkaWrite

But the "exactly once" applies to the whole flow only. The process step is at least once.

In other words, the offset for the read is only committed if the write is successful. If the write fails, the read/process/write will be performed again.

This is implemented using Kafka transactions.
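With Spring Boot, enabling these transactions comes down to configuration; a minimal sketch using Spring Boot's `spring.kafka` properties (the `tx-` prefix value is an arbitrary example):

```properties
# Setting a transaction-id-prefix makes the producer factory transactional
# and auto-configures a KafkaTransactionManager
spring.kafka.producer.transaction-id-prefix=tx-
# Consumers should only see records from committed transactions
spring.kafka.consumer.isolation-level=read_committed
```

With this in place, a `@KafkaListener` that reads, processes, and writes via `KafkaTemplate` commits the consumed offset as part of the same transaction as the write.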

If you are interacting with some other store in the process step (or are not doing Kafka writes at all - kafkaRead -> process), you have to write your own idempotent (de-duplication) code.

But this is relatively easy because each consumer record has a unique key in its topic/partition/offset; just store those with the data and check that you have not already processed that record.
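A minimal in-memory sketch of that check (class and method names are hypothetical; a real implementation would persist the keys in the same store as the processed data, ideally in one transaction, rather than in a `HashSet`):

```java
import java.util.HashSet;
import java.util.Set;

// Hypothetical de-duplication tracker keyed on topic/partition/offset.
// In production, store the key alongside the data so the check survives restarts.
public class OffsetDedup {

    private final Set<String> processed = new HashSet<>();

    // Returns true if this record is new (and marks it as processed);
    // returns false if this topic/partition/offset was already seen.
    public boolean markIfNew(String topic, int partition, long offset) {
        return processed.add(topic + "-" + partition + "@" + offset);
    }
}
```

In a listener you would call `markIfNew(record.topic(), record.partition(), record.offset())` and skip processing when it returns false.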

Kafka does support idempotent producers.
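That is a producer-side setting: the broker de-duplicates retried sends so a produced record is written at most once. A sketch of the relevant Spring Boot properties:

```properties
# Broker de-duplicates retries from this producer
spring.kafka.producer.properties.enable.idempotence=true
# acks=all is required when idempotence is enabled
spring.kafka.producer.acks=all
```

Note this only prevents duplicate writes from retries; it does not make the consumer's processing idempotent.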

Upvotes: 7
