Reputation: 7524
After starting Kafka Connect (connect-standalone), my task fails immediately with:
java.lang.OutOfMemoryError: Java heap space
at java.nio.HeapByteBuffer.<init>(HeapByteBuffer.java:57)
at java.nio.ByteBuffer.allocate(ByteBuffer.java:335)
at org.apache.kafka.common.network.NetworkReceive.readFromReadableChannel(NetworkReceive.java:93)
at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:71)
at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:154)
at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:135)
at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:343)
at org.apache.kafka.common.network.Selector.poll(Selector.java:291)
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:260)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:232)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:180)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureCoordinatorReady(AbstractCoordinator.java:193)
at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.poll(ConsumerCoordinator.java:248)
at org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:1013)
at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:979)
at org.apache.kafka.connect.runtime.WorkerSinkTask.pollConsumer(WorkerSinkTask.java:316)
at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:222)
at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:170)
at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:142)
at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:140)
at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:175)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Some Kafka documentation mentions heap space, telling you to try "the default" and only modify it if there are problems, but it gives no instructions on how to actually modify the heap space.
Upvotes: 66
Views: 136107
Reputation: 1140
I had the same problem, and it was resolved the moment I authenticated with the remote server by setting security.protocol to SASL_SSL along with the other SASL configurations.
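A minimal sketch of the client properties involved; the PLAIN mechanism, username, and password below are placeholder assumptions, so substitute whatever your cluster actually requires:
security.protocol=SASL_SSL
sasl.mechanism=PLAIN
# JAAS login; username/password are placeholders
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
    username="<your-username>" \
    password="<your-password>";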
Upvotes: 0
Reputation: 907
For Windows, edit bin\windows\kafka-server-start.bat; for Linux/macOS, edit bin/kafka-server-start.sh. Update the KAFKA_HEAP_OPTS value there, e.g. on Windows:
set KAFKA_HEAP_OPTS=-Xmx2G -Xms2G
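On Linux/macOS the script uses shell syntax instead of set; a sketch of the equivalent edit (the 2G sizes are just an example, not a recommendation):
export KAFKA_HEAP_OPTS="-Xmx2G -Xms2G"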
Upvotes: 0
Reputation: 6487
In my case, using a Spring Boot 2.7.8 application leveraging Spring Boot Kafka auto-configuration (no configuration in Java code), the problem was caused by the security protocol not being set (apparently the default value is PLAINTEXT). Other errors I got together with java.lang.OutOfMemoryError: Java heap space were:
Stopping container due to an Error
Error while stopping the container:
Uncaught exception in thread 'kafka-producer-network-thread | producer-':
The solution was to add the following lines to my application.properties:
spring.kafka.consumer.security.protocol=SSL
spring.kafka.producer.security.protocol=SSL
My attempt to fix it with just:
spring.kafka.security.protocol=SSL
did not work.
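If the broker presents a certificate your JVM does not already trust, you will likely also need truststore settings. A hedged sketch using Spring Boot's standard Kafka SSL properties (the path and password are placeholders, and given the behavior above you may need the consumer/producer-prefixed variants instead):
spring.kafka.ssl.trust-store-location=file:/path/to/truststore.jks
spring.kafka.ssl.trust-store-password=<truststore-password>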
Upvotes: 3
Reputation: 742
I found another cause of this issue this morning. I was seeing this same exception, except that I'm not using SSL and my messages are very small. The issue in my case turned out to be a misconfigured bootstrap-servers URL. If you point that URL at a host and port that are open but are not a Kafka broker listener, you can trigger this same exception: the client reads the first bytes of the non-Kafka response as a message length and tries to allocate a buffer of that (often enormous) size. The Kafka folks are aware of the general issue and are tracking it here: https://cwiki.apache.org/confluence/display/KAFKA/KIP-498%3A+Add+client-side+configuration+for+maximum+response+size+to+protect+against+OOM
Upvotes: 7
Reputation: 3739
When you have Kafka problems with java.lang.OutOfMemoryError: Java heap space, it doesn't necessarily mean that it's a memory problem. Several Kafka admin tools like kafka-topics.sh will mask the true error with this when trying to connect to an SSL port. The true (masked) error is SSL handshake failed!
See this issue: https://issues.apache.org/jira/browse/KAFKA-4090
The solution is to include a properties file in your command (for kafka-topics.sh this would be --command-config) and to absolutely include this line:
security.protocol=SSL
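For example, a sketch with a placeholder broker address and file name (on newer Kafka versions; any truststore settings your cluster needs would also go in the properties file):
# client-ssl.properties
security.protocol=SSL

kafka-topics.sh --bootstrap-server broker.example.com:9093 \
    --command-config client-ssl.properties --list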
Upvotes: 131
Reputation: 7524
You can control the max and initial heap size by setting the KAFKA_HEAP_OPTS environment variable.
The following example sets a starting size of 512 MB and a maximum size of 1 GB:
KAFKA_HEAP_OPTS="-Xms512m -Xmx1g" connect-standalone connect-worker.properties connect-s3-sink.properties
When running a Kafka command such as connect-standalone, the kafka-run-class script is invoked, which sets a default heap size of 256 MB in the KAFKA_HEAP_OPTS environment variable if it is not already set.
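The fallback in kafka-run-class.sh looks roughly like this (a sketch from memory, so check your installed version):
# use a 256 MB max heap unless the caller set one
if [ -z "$KAFKA_HEAP_OPTS" ]; then
  KAFKA_HEAP_OPTS="-Xmx256M"
fi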
Upvotes: 59
Reputation: 75
I was facing the same issue and could not start my producer and consumer for a given topic. I also deleted all unnecessary log files and topics, even though that's not related to the issue.
Changing kafka-run-class.sh did not work for me. I changed the files below and stopped getting the OOM error; both consumer and producer worked fine after this:
kafka-console-consumer.sh
kafka-console-producer.sh
I increased the size to KAFKA_HEAP_OPTS="-Xmx1G"; it was 512m earlier.
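For reference, the heap default inside those console scripts looks roughly like this (a sketch from memory; the edit just changes the -Xmx value):
if [ "x$KAFKA_HEAP_OPTS" = "x" ]; then
    export KAFKA_HEAP_OPTS="-Xmx1G"   # was -Xmx512M by default
fi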
Upvotes: 2