Reputation: 278
I'm loading the connector with this configuration:
{
  "name": "jdbc-source-test",
  "config": {
    "connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector",
    "connection.url": "jdbc:mysql://localhost:3306/test?user=root&password=password",
    "table.whitelist": "test",
    "mode": "timestamp",
    "timestamp.column.name": "create_time",
    "topic.prefix": "test-mysql-jdbc-",
    "name": "jdbc-source-test"
  }
}
It writes the following message to the log:
[2018-12-12 17:33:14,225] ERROR WorkerSourceTask{id=jdbc-source-test-0} Task threw an uncaught and unrecoverable exception (org.apache.kafka.connect.runtime.WorkerTask:177)
org.apache.kafka.connect.errors.ConnectException: Cannot make incremental queries using timestamp columns [create_time] on `test`.`test` because all of these columns nullable.
I suspect it doesn't work because this column has type bigint(20). Are there any workarounds for this? Confluent version: 5.0.1.
Upvotes: 2
Views: 2294
Reputation: 8253
An easy way to fix this issue is to add "validate.non.null": "false" to the config:
{
  "name": "jdbc-source-test",
  "config": {
    "connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector",
    "connection.url": "jdbc:mysql://localhost:3306/test?user=root&password=password",
    "table.whitelist": "test",
    "mode": "timestamp",
    "timestamp.column.name": "create_time",
    "topic.prefix": "test-mysql-jdbc-",
    "validate.non.null": "false"
  }
}
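If the connector is already registered, the corrected config can be pushed without deleting and recreating it; a minimal sketch, assuming the Kafka Connect REST API is reachable at localhost:8083 (adjust host and port for your deployment):

```shell
# Write the corrected config (just the "config" object) to a file.
# The key addition is "validate.non.null": "false".
cat > jdbc-source-test.json <<'EOF'
{
  "connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector",
  "connection.url": "jdbc:mysql://localhost:3306/test?user=root&password=password",
  "table.whitelist": "test",
  "mode": "timestamp",
  "timestamp.column.name": "create_time",
  "topic.prefix": "test-mysql-jdbc-",
  "validate.non.null": "false"
}
EOF

# Then update the existing connector in place via the Connect REST API
# (uncomment and run against your own Connect worker):
# curl -X PUT -H "Content-Type: application/json" \
#      --data @jdbc-source-test.json \
#      http://localhost:8083/connectors/jdbc-source-test/config
```

PUT on /connectors/{name}/config creates the connector if it doesn't exist and reconfigures it if it does, so it is safe to use for both cases.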
Upvotes: 2
Reputation: 243
I had the same error on Postgresql, and setting the timestamp column mentioned in the error NOT NULL in the database fixed it.
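For example, on PostgreSQL the constraint can be added like this (a sketch using the table and column names from the question; the backfill value is a hypothetical choice, and SET NOT NULL will fail if any NULL rows remain):

```sql
-- Backfill any existing NULLs first, otherwise SET NOT NULL is rejected
UPDATE test SET create_time = now() WHERE create_time IS NULL;

-- Make the timestamp column NOT NULL so the connector's validation passes
ALTER TABLE test ALTER COLUMN create_time SET NOT NULL;
```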
Another option is to set validate.non.null: false in the connector configuration.
validate.non.null: By default, the JDBC connector will validate that all incrementing and timestamp tables have NOT NULL set for the columns being used as their ID/timestamp. If the tables don't, the JDBC connector will fail to start. Setting this to false will disable these checks.
Upvotes: 4