user2416

Reputation: 233

Debezium Postgres Kafka connector is failing with a Java heap space issue

We have 13 Debezium Postgres Kafka connectors running on a Strimzi KafkaConnect cluster. One of them is failing with Caused by: java.lang.OutOfMemoryError: Java heap space. We increased the JVM heap from 2g to 4g, but it still fails with the same error.
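For reference, a minimal sketch of how the heap was raised, assuming the Strimzi KafkaConnect custom resource (the resource name is a placeholder):

```yaml
# Sketch: raising the Kafka Connect heap via the Strimzi KafkaConnect CR.
# Only jvmOptions is relevant here; metadata.name is illustrative.
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
metadata:
  name: my-connect-cluster
spec:
  replicas: 1
  jvmOptions:
    "-Xms": "4g"
    "-Xmx": "4g"   # raised from 2g; the connector still fails with the same OOM
```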

Complete log:

```
java.lang.OutOfMemoryError: Java heap space
    at java.util.Arrays.copyOfRange(Arrays.java:3664)
    at java.lang.String.<init>(String.java:207)
    at com.fasterxml.jackson.core.util.TextBuffer.setCurrentAndReturn(TextBuffer.java:696)
    at com.fasterxml.jackson.core.json.UTF8StreamJsonParser._finishAndReturnString(UTF8StreamJsonParser.java:2405)
    at com.fasterxml.jackson.core.json.UTF8StreamJsonParser.getValueAsString(UTF8StreamJsonParser.java:312)
    at io.debezium.document.JacksonReader.parseArray(JacksonReader.java:219)
    at io.debezium.document.JacksonReader.parseDocument(JacksonReader.java:131)
    at io.debezium.document.JacksonReader.parseArray(JacksonReader.java:213)
    at io.debezium.document.JacksonReader.parseDocument(JacksonReader.java:131)
    at io.debezium.document.JacksonReader.parse(JacksonReader.java:102)
    at io.debezium.document.JacksonReader.read(JacksonReader.java:72)
    at io.debezium.connector.postgresql.connection.wal2json.NonStreamingWal2JsonMessageDecoder.processMessage(NonStreamingWal2JsonMessageDecoder.java:54)
    at io.debezium.connector.postgresql.connection.PostgresReplicationConnection$1.deserializeMessages(PostgresReplicationConnection.java:418)
    at io.debezium.connector.postgresql.connection.PostgresReplicationConnection$1.readPending(PostgresReplicationConnection.java:412)
    at io.debezium.connector.postgresql.PostgresStreamingChangeEventSource.execute(PostgresStreamingChangeEventSource.java:119)
    at io.debezium.pipeline.ChangeEventSourceCoordinator.lambda$start$0(ChangeEventSourceCoordinator.java:99)
    at io.debezium.pipeline.ChangeEventSourceCoordinator$$Lambda$464/1759003957.run(Unknown Source)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
```

Upvotes: 1

Views: 3346

Answers (2)

Jiri Pechanec

Reputation: 1976

It looks like a very large transaction message is arriving and parsing it fails due to memory constraints. The wal2json_streaming decoder splits the message into smaller chunks, which should prevent this problem.

Generally, if possible, use the protobuf or pgoutput decoders instead, as they stream messages from the database per change rather than per transaction; a configuration sketch follows.
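Switching the decoder is a one-property change (plugin.name) in the connector configuration. A minimal sketch, assuming the connectors are managed as Strimzi KafkaConnector resources; names and connection details are placeholders:

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnector
metadata:
  name: my-postgres-connector
  labels:
    strimzi.io/cluster: my-connect-cluster   # must match the KafkaConnect name
spec:
  class: io.debezium.connector.postgresql.PostgresConnector
  tasksMax: 1
  config:
    database.hostname: postgres.example.com  # placeholder
    database.port: 5432
    database.user: debezium                  # placeholder
    database.password: changeme              # placeholder
    database.dbname: inventory               # placeholder
    database.server.name: inventory          # placeholder
    # pgoutput streams one message per change and needs PostgreSQL 10+;
    # wal2json_streaming at least splits a transaction into smaller chunks.
    plugin.name: pgoutput
```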

Upvotes: 1

QuickSilver

Reputation: 4045

Try tuning the Debezium properties below (see the sketch after this list):

  • Increase max.batch.size
  • Decrease max.queue.size
  • Tune offset.flush.interval.ms to match your application's requirements
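A sketch of where those settings live, with illustrative values rather than recommendations. Note that max.batch.size and max.queue.size are Debezium connector properties (max.queue.size must stay larger than max.batch.size), while offset.flush.interval.ms is a Kafka Connect worker property, so on Strimzi it belongs in the KafkaConnect resource rather than the connector:

```yaml
# Connector-level tuning (values are illustrative).
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnector
metadata:
  name: my-postgres-connector
  labels:
    strimzi.io/cluster: my-connect-cluster
spec:
  class: io.debezium.connector.postgresql.PostgresConnector
  config:
    max.batch.size: 4096   # up from the default of 2048
    max.queue.size: 6144   # down from the default of 8192, still > max.batch.size
---
# Worker-level setting: offset.flush.interval.ms goes in the KafkaConnect
# resource's config block (the Kafka Connect default is 60000 ms).
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
metadata:
  name: my-connect-cluster
spec:
  config:
    offset.flush.interval.ms: 10000
```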

Upvotes: 1
