Devidb

Reputation: 112

Kafka connect jdbc sink SQL error handling

I am configuring a Kafka Connect JDBC sink connector to write my Kafka messages to a Postgres table. Everything works fine except the error handling. Sometimes messages in my topic contain bad data, so a database constraint fails with an expected SQL exception (duplicate key...).

I would like to route these bad messages to a DLQ and commit the offset so the connector moves on to the next messages, so I configured the connector with:

"errors.tolerance": "all",
"errors.deadletterqueue.topic.name": "myDLQTopicName"

but it does not change anything; the connector keeps retrying until it crashes.

Is there another configuration option I'm missing? These are the only two I found in the Confluent documentation.

(The JDBC connector changelog says error handling in the put stage was implemented in version 10.1.0 (CCDB-192), and I'm using the latest version of the connector, 10.5.1.)
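For context, a fuller sink-connector config with the DLQ-related properties might look like the sketch below. The connector name, connection URL, and topic names are placeholders; `errors.deadletterqueue.topic.replication.factor`, `errors.deadletterqueue.context.headers.enable`, and the `errors.log.*` properties are standard Kafka Connect sink-connector error-handling settings, with the replication factor of 1 assuming a single-broker dev cluster:

```json
{
  "name": "postgres-sink",
  "config": {
    "connector.class": "io.confluent.connect.jdbc.JdbcSinkConnector",
    "connection.url": "jdbc:postgresql://localhost:5432/mydb",
    "topics": "myTopic",

    "errors.tolerance": "all",
    "errors.deadletterqueue.topic.name": "myDLQTopicName",
    "errors.deadletterqueue.topic.replication.factor": 1,
    "errors.deadletterqueue.context.headers.enable": true,

    "errors.log.enable": true,
    "errors.log.include.messages": true
  }
}
```

Enabling the error log and context headers makes it easier to see, per failed record, which stage (convert, transform, or put) reported the failure.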

Upvotes: 0

Views: 1205

Answers (1)
