Richa Gupta

Reputation: 153

Kafka consumer crashing

My application consumes data from Kafka 0.9.

Currently, we fetch messages, process them, and commit. If processing takes too long and the consumer fails to send a heartbeat in time, the consumer coordinator assumes the consumer is dead and rebalances its partitions. The overall effect is that the number of live consumers keeps shrinking, and after a while our data processing halts.
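
Roughly, the consume loop looks like the sketch below (a simplified illustration; topic, group, and settings are placeholders, not our real values):

    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    public class Worker {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092"); // placeholder
            props.put("group.id", "my-group");                // placeholder
            props.put("enable.auto.commit", "false");
            props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
            props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(Collections.singletonList("my-topic")); // placeholder
                while (true) {
                    ConsumerRecords<String, String> records = consumer.poll(1000);
                    for (ConsumerRecord<String, String> record : records) {
                        process(record); // long-running work here delays the next poll() and the heartbeat
                    }
                    consumer.commitSync(); // commit only after the batch is processed
                }
            }
        }

        private static void process(ConsumerRecord<String, String> record) {
            // expensive per-record processing (the part that sometimes exceeds the session timeout)
        }
    }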

How should I handle this kind of application failure?

Is there any way to keep the consumers alive, or to spawn a new consumer if the coordinator finds one dead?

Upvotes: 2

Views: 1366

Answers (3)

Darshnik Swamy

Reputation: 71

You need to increase the value of max.poll.interval.ms. This places an upper bound on the amount of time the consumer can be idle before fetching more records. For the other consumer heartbeat configs, please refer to this.
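
For example (a sketch; note that max.poll.interval.ms only exists in clients from 0.10.1 onwards, per KIP-62, so this assumes an upgraded client):

    // Sketch: raising max.poll.interval.ms on the consumer (the value is illustrative).
    // Note: this property exists only in Kafka clients 0.10.1+ (KIP-62); the 0.9 consumer
    // only has session.timeout.ms, which bounds the time between poll() calls.
    Properties props = new Properties();
    props.put("max.poll.interval.ms", "600000"); // allow up to 10 minutes between poll() calls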

Upvotes: 0

OneCricketeer

Reputation: 192023

One obvious solution would be to profile the code that processes the messages and try to do less blocking work there, for example HTTP or database calls on the same thread.

The logs should be telling you to reduce max.poll.records (do less total processing between polls) or increase max.poll.interval.ms (allow more time between polls).
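
For example (a sketch; the values are illustrative, not recommendations):

    // Either knob keeps the work done between two poll() calls inside the allowed window.
    Properties props = new Properties();
    props.put("max.poll.records", "100");        // fewer records per poll -> less processing per poll
    props.put("max.poll.interval.ms", "600000"); // or allow more time before the group evicts the consumer
    // Rough budget: max.poll.records x worst-case time per record must stay below max.poll.interval.ms.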

There are other settings for consumer heartbeats, but start with these.

Upvotes: 1

Gary Russell

Reputation: 174799

0.9 is very very old; there have been many improvements in this area. See KIP-62.

A common issue with the new consumer is the mix of its single-threaded design and the need to maintain liveness by sending heartbeats to the coordinator. We recommend users do message processing and partition initialization/cleanup from the same thread as the consumer's poll loop, but if this takes longer than the configured session timeout, the consumer is removed from the group and its partitions are assigned to other members. ...

If you are using Spring for Apache Kafka you should be using at least version 1.3.10; earlier versions are no longer supported.

The current version is 2.4.3.

If you are using a current spring-kafka version, then you need to reduce max.poll.records or increase max.poll.interval.ms.
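
For example, with Java configuration (a sketch; servers, group, types, and values are illustrative, and it assumes a recent kafka-clients on the classpath):

    import java.util.HashMap;
    import java.util.Map;
    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.common.serialization.StringDeserializer;
    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;
    import org.springframework.kafka.core.ConsumerFactory;
    import org.springframework.kafka.core.DefaultKafkaConsumerFactory;

    @Configuration
    public class ConsumerTuningConfig {

        @Bean
        public ConsumerFactory<String, String> consumerFactory() {
            Map<String, Object> props = new HashMap<>();
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // illustrative
            props.put(ConsumerConfig.GROUP_ID_CONFIG, "my-group");                // illustrative
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
            props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, 100);         // fewer records per poll
            props.put(ConsumerConfig.MAX_POLL_INTERVAL_MS_CONFIG, 600_000); // more time between polls
            return new DefaultKafkaConsumerFactory<>(props);
        }
    }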

Upvotes: 1
