Owais Ajaz

Reputation: 264

Kafka Connector - Tolerance exceeded in error handler

I have over 50 source connectors running against SQL Server, but two of them are failing. What could be the reason? We have limited access to the Kafka server. This is the status of one of the failing connectors:

{
    "name": "xxxxxxxxxxxxx",
    "connector": {
        "state": "RUNNING",
        "worker_id": "xxxxxxxxxxxxxx:8083"
    },
    "tasks": [
        {
            "state": "FAILED",
            "trace": "org.apache.kafka.connect.errors.ConnectException: Tolerance exceeded in error handler\n\tat org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:178)\n\tat org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execute(RetryWithToleranceOperator.java:104)\n\tat org.apache.kafka.connect.runtime.TransformationChain.apply(TransformationChain.java:44)\n\tat org.apache.kafka.connect.runtime.WorkerSourceTask.sendRecords(WorkerSourceTask.java:292)\n\tat org.apache.kafka.connect.runtime.WorkerSourceTask.execute(WorkerSourceTask.java:228)\n\tat org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:175)\n\tat org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:219)\n\tat java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)\n\tat java.util.concurrent.FutureTask.run(FutureTask.java:266)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)\n\tat java.lang.Thread.run(Thread.java:748)\nCaused by: org.apache.kafka.connect.errors.DataException: Schema required for [updating schema metadata]\n\tat org.apache.kafka.connect.transforms.util.Requirements.requireSchema(Requirements.java:31)\n\tat org.apache.kafka.connect.transforms.SetSchemaMetadata.apply(SetSchemaMetadata.java:64)\n\tat org.apache.kafka.connect.runtime.TransformationChain.lambda$apply$0(TransformationChain.java:44)\n\tat org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndRetry(RetryWithToleranceOperator.java:128)\n\tat org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:162)\n\t... 11 more\n",
            "id": 0,
            "worker_id": "xxxxxxxxxxxxx:8083"
        }
    ],
    "type": "source"
}

Source Connector configurations:

{
"name": "xxxxxxxx",
"config": {
    "connector.class": "io.debezium.connector.sqlserver.SqlServerConnector",
    "database.history.kafka.topic": "dbhistory.fullfillment.ecom",
    "transforms": "unwrap,setSchemaName",
    "internal.key.converter.schemas.enable": "false",
    "offset.storage.partitons": "2",
    "include.schema.changes": "false",
    "table.whitelist": "dbo.abc",
    "decimal.handling.mode": "double",
    "transforms.unwrap.drop.tombstones": "false",
    "transforms.unwrap.type": "io.debezium.transforms.ExtractNewRecordState",
    "value.converter": "io.confluent.connect.avro.AvroConverter",
    "key.converter": "org.apache.kafka.connect.json.JsonConverter",
    "transforms.setSchemaName.schema.name": "com.data.meta.avro.abc",
    "database.dbname": "xxxxxx",
    "database.user": "xxxxxx",
    "database.history.kafka.bootstrap.servers": "xxxxxxxxxxxx",
    "database.server.name": "xxxxxxx",
    "database.port": "xxxxxx",
    "transforms.setSchemaName.type": "org.apache.kafka.connect.transforms.SetSchemaMetadata$Value",
    "key.converter.schemas.enable": "false",
    "value.converter.schema.registry.url": "http://xxxxxxxxxx:8081",
    "internal.key.converter": "org.apache.kafka.connect.json.JsonConverter",
    "database.hostname": "xxxxxxx",
    "database.password": "xxxxxxx",
    "internal.value.converter.schemas.enable": "false",
    "internal.value.converter": "org.apache.kafka.connect.json.JsonConverter",
    "name": "xxxxxxxxxxx"
}

}

Upvotes: 6

Views: 14353

Answers (1)

Robin Moffatt

Reputation: 32140

If you look at the stack trace in the trace field, replacing the \n and \t characters with newlines and tabs, you will see:

org.apache.kafka.connect.errors.ConnectException: Tolerance exceeded in error handler
    at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:178)
    at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execute(RetryWithToleranceOperator.java:104)
    at org.apache.kafka.connect.runtime.TransformationChain.apply(TransformationChain.java:44)
    at org.apache.kafka.connect.runtime.WorkerSourceTask.sendRecords(WorkerSourceTask.java:292)
    at org.apache.kafka.connect.runtime.WorkerSourceTask.execute(WorkerSourceTask.java:228)
    at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:175)
    at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:219)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.kafka.connect.errors.DataException: Schema required for [updating schema metadata]
    at org.apache.kafka.connect.transforms.util.Requirements.requireSchema(Requirements.java:31)
    at org.apache.kafka.connect.transforms.SetSchemaMetadata.apply(SetSchemaMetadata.java:64)
    at org.apache.kafka.connect.runtime.TransformationChain.lambda$apply$0(TransformationChain.java:44)
    at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndRetry(RetryWithToleranceOperator.java:128)
    at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:162)
    ... 11 more

And thus the cause of your error is being thrown in the SetSchemaMetadata Single Message Transform: org.apache.kafka.connect.errors.DataException: Schema required for [updating schema metadata]
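As an aside, a quick way to render those \n and \t escapes is the shell's printf '%b'. This is a minimal sketch on a shortened sample of the trace; in practice you would first extract the trace field from the status JSON (e.g. with curl against the worker's REST endpoint, piped through jq -r '.tasks[].trace' — worker host and connector name are yours to fill in):

```shell
# Render the \n and \t escapes in a "trace" field as real newlines and tabs.
# Shortened sample of the trace above; normally this string would come from
# the connector status JSON returned by the Connect REST API.
trace='org.apache.kafka.connect.errors.ConnectException: Tolerance exceeded in error handler\n\tat org.apache.kafka.connect.runtime.TransformationChain.apply(TransformationChain.java:44)'
printf '%b\n' "$trace"
```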

I would check the configuration on your connectors, isolate the ones that have failed, and look at the Single Message Transform configuration. This issue might be relevant.
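One common way for SetSchemaMetadata to receive a schemaless record in a pipeline like this is a tombstone (null-value delete record) passing through, since ExtractNewRecordState is configured with drop.tombstones=false. I can't confirm that's the cause here, but on Kafka Connect 2.6+ you could rule it out by guarding the transform with a predicate so it only applies to non-tombstone records. A hedged sketch of the relevant config fragment (the predicate name isTombstone is my own choice; the rest of the connector config stays as-is):

```json
{
    "transforms": "unwrap,setSchemaName",
    "transforms.setSchemaName.type": "org.apache.kafka.connect.transforms.SetSchemaMetadata$Value",
    "transforms.setSchemaName.schema.name": "com.data.meta.avro.abc",
    "transforms.setSchemaName.predicate": "isTombstone",
    "transforms.setSchemaName.negate": "true",
    "predicates": "isTombstone",
    "predicates.isTombstone.type": "org.apache.kafka.connect.transforms.predicates.RecordIsTombstone"
}
```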

Upvotes: 4
