Shahid Ghafoor

Reputation: 3103

Kafka Connect JDBC Sink - pk.fields for each topic (table) in one sink configuration

With respect to this example debezium-example

I have multiple topics with different primary keys

item (pk : id)
itemDetail (pk :id, itemId)
itemLocation (pk :id, itemId)

jdbc-sink.source

{
    "name": "jdbc-sink",
    "config": {
        "connector.class": "io.confluent.connect.jdbc.JdbcSinkConnector",
        "tasks.max": "1",
        "topics": "item,itemDetail,itemLocation",
        "connection.url": "jdbc:postgresql://postgres:5432/inventory?user=postgresuser&password=postgrespw",
        "transforms": "unwrap",
        "transforms.unwrap.type": "io.debezium.transforms.UnwrapFromEnvelope",
        "auto.create": "true",
        "insert.mode": "upsert",
        "pk.fields": "id",
        "pk.mode": "record_value"
    }
}

How can we specify "pk.fields" per topic (table) in a single sink configuration?

Upvotes: 4

Views: 3002

Answers (1)

OneCricketeer

Reputation: 191710

I don't think the JDBC sink connector supports a per-topic PK mapping within a single configuration.

You will want to create a separate connector config for each topic:

{
    "name": "jdbc-sink-item",
    "config": {
        "connector.class": "io.confluent.connect.jdbc.JdbcSinkConnector",
        "tasks.max": "1",
        "topics": "item",
        "connection.url": "jdbc:postgresql://postgres:5432/inventory?user=postgresuser&password=postgrespw",
        "transforms": "unwrap",
        "transforms.unwrap.type": "io.debezium.transforms.UnwrapFromEnvelope",
        "auto.create": "true",
        "insert.mode": "upsert",
        "pk.mode": "record_value",
        "pk.fields": "id"
    }
}

And

{
    "name": "jdbc-sink-itemDetail",
    "config": {
        "connector.class": "io.confluent.connect.jdbc.JdbcSinkConnector",
        "tasks.max": "1",
        "topics": "itemDetail",
        "connection.url": "jdbc:postgresql://postgres:5432/inventory?user=postgresuser&password=postgrespw",
        "transforms": "unwrap",
        "transforms.unwrap.type": "io.debezium.transforms.UnwrapFromEnvelope",
        "auto.create": "true",
        "insert.mode": "upsert",
        "pk.mode": "record_value",
        "pk.fields": "id,itemId"
    }
}

And so on for itemLocation.
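Since the per-topic configs differ only in "name", "topics", and "pk.fields", it can help to generate them from the shared settings rather than hand-maintain three near-identical JSON files. A minimal sketch in Python, assuming the topic-to-PK mapping from the question (the connector names like jdbc-sink-item are my own convention, not anything Kafka Connect requires):

```python
import json

# Shared settings copied from the single jdbc-sink config in the question.
BASE_CONFIG = {
    "connector.class": "io.confluent.connect.jdbc.JdbcSinkConnector",
    "tasks.max": "1",
    "connection.url": "jdbc:postgresql://postgres:5432/inventory?user=postgresuser&password=postgrespw",
    "transforms": "unwrap",
    "transforms.unwrap.type": "io.debezium.transforms.UnwrapFromEnvelope",
    "auto.create": "true",
    "insert.mode": "upsert",
    "pk.mode": "record_value",
}

# Topic -> comma-separated primary-key fields, from the question.
PK_FIELDS = {
    "item": "id",
    "itemDetail": "id,itemId",
    "itemLocation": "id,itemId",
}

def build_connector(topic: str, pk_fields: str) -> dict:
    """Build one sink-connector definition for a single topic."""
    return {
        "name": f"jdbc-sink-{topic}",
        "config": {**BASE_CONFIG, "topics": topic, "pk.fields": pk_fields},
    }

if __name__ == "__main__":
    # Print each connector definition; each one can then be POSTed
    # to the Kafka Connect REST API individually.
    for topic, pk in PK_FIELDS.items():
        print(json.dumps(build_connector(topic, pk), indent=2))
```

Each generated definition can be submitted to the Connect REST API (e.g. POST to http://localhost:8083/connectors, with the host being whatever your Connect worker listens on).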

Upvotes: 3
