aKumara

Reputation: 401

How to handle FATAL Kafka produce errors with the librdkafka C++ API?

enable.idempotence=true

librdkafka version is 1.6.0
Our external Kafka readers expect messages without any gaps or duplicates

When producing with the librdkafka C++ API, the following three types of errors can be detected (a sketch of where each check fits follows the list):

  1. From the event callback, void RdKafka::EventCb::event_cb(RdKafka::Event& event), with event.fatal() == true.
  2. From the delivery report callback, void RdKafka::DeliveryReportCb::dr_cb(RdKafka::Message& message), with message.err() != RdKafka::ERR_NO_ERROR or message.status() != RdKafka::Message::MSG_STATUS_PERSISTED.
  3. The return value of RdKafka::Producer::produce() is neither RdKafka::ERR_NO_ERROR nor RdKafka::ERR__QUEUE_FULL.

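For reference, here is a minimal sketch of where these three checks sit. The topic name, callback class names and payload are placeholders I made up for illustration, not something from the actual application:

    #include <iostream>
    #include <string>
    #include <librdkafka/rdkafkacpp.h>

    // 1. Event callback: watch for fatal errors raised by the idempotent producer.
    class ExampleEventCb : public RdKafka::EventCb {
     public:
      void event_cb(RdKafka::Event &event) override {
        if (event.type() == RdKafka::Event::EVENT_ERROR && event.fatal()) {
          std::cerr << "FATAL: " << RdKafka::err2str(event.err())
                    << ": " << event.str() << std::endl;
          // Decide here: terminate, or trigger a recovery procedure.
        }
      }
    };

    // 2. Delivery report callback: per-message delivery outcome.
    class ExampleDeliveryCb : public RdKafka::DeliveryReportCb {
     public:
      void dr_cb(RdKafka::Message &message) override {
        if (message.err() != RdKafka::ERR_NO_ERROR ||
            message.status() != RdKafka::Message::MSG_STATUS_PERSISTED) {
          std::cerr << "Delivery failed: " << message.errstr() << std::endl;
        }
      }
    };

    int main() {
      std::string errstr;
      ExampleEventCb event_cb;
      ExampleDeliveryCb dr_cb;

      RdKafka::Conf *conf = RdKafka::Conf::create(RdKafka::Conf::CONF_GLOBAL);
      conf->set("enable.idempotence", "true", errstr);
      conf->set("event_cb", &event_cb, errstr);
      conf->set("dr_cb", &dr_cb, errstr);

      RdKafka::Producer *producer = RdKafka::Producer::create(conf, errstr);
      if (!producer) {
        std::cerr << "Failed to create producer: " << errstr << std::endl;
        return 1;
      }

      std::string payload = "example message";

      // 3. Check the return value of produce() itself.
      RdKafka::ErrorCode err = producer->produce(
          "my-topic", RdKafka::Topic::PARTITION_UA,
          RdKafka::Producer::RK_MSG_COPY,
          const_cast<char *>(payload.data()), payload.size(),
          nullptr, 0, 0, nullptr);

      if (err == RdKafka::ERR__QUEUE_FULL) {
        producer->poll(100);        // serve callbacks, then the caller may retry
      } else if (err != RdKafka::ERR_NO_ERROR) {
        std::cerr << "produce() failed: " << RdKafka::err2str(err) << std::endl;
      }

      producer->flush(10000);       // wait for outstanding delivery reports
      delete producer;
      delete conf;
      return 0;
    }
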
When any of these errors occurs, what is the recommended error handling procedure?
Terminate?
OR
Is it correct to do the following? (Steps 1 to 4 together form the suggested recovery; a rough sketch of step 3 follows the list.)

  1. Destroy the existing RdKafka::Producer object.
  2. Re-create the RdKafka::Producer object.
  3. Using a consumer, read the last message that was written to the topic partition.
  4. Resume producing from the message that follows the last written message found in step 3.
    OR
    Any other approach?
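
If the rebuild-and-resync approach were taken, step 3 could look roughly like the sketch below. It uses the legacy RdKafka::Consumer API and assumes the application can work out the next message to send from the payload of the last persisted one; the topic name and partition are placeholders, and this is only an illustration of the question's proposal, not a confirmed recommendation:

    #include <iostream>
    #include <string>
    #include <librdkafka/rdkafkacpp.h>

    // Rough sketch of step 3: read the last message persisted in a partition
    // so the producer can resume from the record after it.
    int main() {
      std::string errstr;
      const std::string topic_name = "my-topic";  // placeholder
      const int32_t partition = 0;                // placeholder

      RdKafka::Conf *conf = RdKafka::Conf::create(RdKafka::Conf::CONF_GLOBAL);
      RdKafka::Consumer *consumer = RdKafka::Consumer::create(conf, errstr);
      if (!consumer) {
        std::cerr << "Failed to create consumer: " << errstr << std::endl;
        return 1;
      }

      RdKafka::Topic *topic =
          RdKafka::Topic::create(consumer, topic_name, nullptr, errstr);

      // Find the offset of the last written message (high watermark - 1).
      int64_t low = 0, high = 0;
      consumer->query_watermark_offsets(topic_name, partition, &low, &high, 5000);

      if (high > low) {
        consumer->start(topic, partition, high - 1);
        RdKafka::Message *msg = consumer->consume(topic, partition, 5000);
        if (msg->err() == RdKafka::ERR_NO_ERROR) {
          std::string last(static_cast<const char *>(msg->payload()), msg->len());
          std::cout << "Last persisted message: " << last << std::endl;
          // Step 4: application-specific logic decides what to produce next.
        }
        delete msg;
        consumer->stop(topic, partition);
      }

      delete topic;
      delete consumer;
      delete conf;
      return 0;
    }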

Upvotes: 0

Views: 616

Answers (1)

indev

Reputation: 165

librdkafka handles errors based on their severity.

Non-permanent errors are handled internally by retrying the failed messages.

For permanent errors, such as the broker being down or message delivery failures (reported via dr_msg_cb), you need to call rd_kafka_poll() regularly to serve the callbacks and retry the failed messages.
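
In the C++ API the equivalent of rd_kafka_poll() is RdKafka::Producer::poll(). A minimal sketch of a produce loop that serves callbacks regularly might look like this; the topic name and payload container are placeholders:

    #include <string>
    #include <vector>
    #include <librdkafka/rdkafkacpp.h>

    // Sketch: serve delivery report / event callbacks while producing
    // (the C++ counterpart of calling rd_kafka_poll() in the C API).
    void produce_all(RdKafka::Producer *producer,
                     const std::vector<std::string> &payloads) {
      for (const std::string &p : payloads) {
        RdKafka::ErrorCode err;
        while (true) {
          err = producer->produce("my-topic", RdKafka::Topic::PARTITION_UA,
                                  RdKafka::Producer::RK_MSG_COPY,
                                  const_cast<char *>(p.data()), p.size(),
                                  nullptr, 0, 0, nullptr);
          if (err != RdKafka::ERR__QUEUE_FULL)
            break;
          producer->poll(100);   // block briefly; callbacks fire, queue drains
        }
        producer->poll(0);       // serve any pending delivery reports / events
      }
      producer->flush(10000);    // keep serving callbacks until all are delivered
    }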

Sources:

https://github.com/confluentinc/librdkafka/blob/master/INTRODUCTION.md#message-reliability

https://github.com/confluentinc/librdkafka/wiki/Error-handling

Upvotes: 1
