Hyuck

Reputation: 1

Debezium JDBC Connector's DELETE on New Record State Extraction not working

Debezium JDBC Connector's DELETE doesn't work when the record is unwrapped (New Record State Extraction).

According to documentation,

-- New Record State Extraction
A database DELETE operation causes Debezium to generate two Kafka records:

A record that contains "op": "d", the before row data, and some other fields.

A tombstone record that has the same key as the deleted row and a value of null. This record is a marker for Apache Kafka. It indicates that log compaction can remove all records that have this key.

Instead of dropping the record that contains the before row data, you can configure the event flattening SMT to do one of the following:

Keep the record in the stream and edit it to have only the "value": "null" field.

Keep the record in the stream and edit it to have a value field that contains the key/value pairs that were in the before field with an added "__deleted": "true" entry.
-- JDBC
Delete mode
The Debezium JDBC sink connector can delete rows in the destination database when a DELETE or tombstone event is consumed. By default, the JDBC sink connector does not enable delete mode.

If you want the connector to remove rows, you must explicitly set delete.enabled=true in the connector configuration. To use this mode you must also set primary.key.fields to a value other than none. The preceding configuration is necessary because deletes are executed based on the primary key mapping, so if a destination table has no primary key mapping, the connector is unable to delete rows.

Debezium should be able to perform a DELETE when the JDBC sink connector is set to delete.enabled=true and the New Record State Extraction SMT is set to transforms.unwrap.drop.tombstones=false and transforms.unwrap.delete.handling.mode=none.

However, it didn't work when tested with Debezium (debezium/connect:2.6.0.Final), MySQL 8.2.0, and PostgreSQL 16.2 (MySQL -> PostgreSQL CDC).
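For reference, this is the kind of configuration I mean (a minimal sketch; connector names, topic, and the id primary-key column are placeholders for my actual setup):

```json
// Source-side SMT (ExtractNewRecordState): keep delete events as-is
{
  "transforms": "unwrap",
  "transforms.unwrap.type": "io.debezium.transforms.ExtractNewRecordState",
  "transforms.unwrap.drop.tombstones": "false",
  "transforms.unwrap.delete.handling.mode": "none"
}
```

```json
// JDBC sink side: enable delete mode with a primary-key mapping
{
  "connector.class": "io.debezium.connector.jdbc.JdbcSinkConnector",
  "delete.enabled": "true",
  "primary.key.mode": "record_key",
  "primary.key.fields": "id"
}
```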

I've checked:

Any idea what might be wrong? Is this a bug?

Upvotes: 0

Views: 573

Answers (1)

Dyan

Reputation: 81

You should be able to track deleted records from the source table by setting transforms.unwrap.delete.handling.mode to rewrite, because the connector will add a metadata field __deleted to the messages sent over Kafka. In your destination DB, a new column named __deleted will appear.

This is what I did to track deleted records in the source table. AFAIK, when using transforms.unwrap, automatic deletion on the destination table can't be done. I'm still looking for a workaround for this issue.
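The rewrite setup above looks roughly like this (a sketch; only the SMT properties are shown):

```json
// ExtractNewRecordState in rewrite mode: delete events are kept and
// flattened, with an added "__deleted": "true" field instead of a null value
{
  "transforms": "unwrap",
  "transforms.unwrap.type": "io.debezium.transforms.ExtractNewRecordState",
  "transforms.unwrap.drop.tombstones": "false",
  "transforms.unwrap.delete.handling.mode": "rewrite"
}
```

With this in place, the sink table accumulates the rewritten rows, and you can filter on the __deleted column to find deleted source records.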

Upvotes: 0
