Ildar Nurislamov

Reputation: 1

Kafka: ZstdIOException: Cannot get ByteBuffer of size 131075 from the BufferPool

We have a Kafka cluster of 3 nodes using the bitnami/kafka:3.6.0 image and the KRaft protocol.

We also use zstd compression, set on both topics and producers.
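For context, this is roughly how zstd compression is enabled on the producer side (topics additionally carry the `compression.type` topic config). A minimal sketch only; the broker address, topic name, and serializers are placeholders, not our actual configuration:

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.serialization.StringSerializer;

    public class ZstdProducerExample {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker-1:9092"); // placeholder
            props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            // Record batches are compressed with zstd before being sent to the broker.
            props.put(ProducerConfig.COMPRESSION_TYPE_CONFIG, "zstd");

            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                producer.send(new ProducerRecord<>("topic_name", "key", "value"));
                producer.flush();
            }
        }
    }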

Suddenly, one of the nodes started to spam the following errors in its logs:

[2024-02-03 10:01:12,699] ERROR [ReplicaManager broker=1] Error processing append operation on partition topic_name (kafka.server.ReplicaManager)
org.apache.kafka.common.KafkaException: com.github.luben.zstd.ZstdIOException: Cannot get ByteBuffer of size 131075 from the BufferPool
    at org.apache.kafka.common.compress.ZstdFactory.wrapForInput(ZstdFactory.java:70)
    at org.apache.kafka.common.record.CompressionType$5.wrapForInput(CompressionType.java:155)
    at org.apache.kafka.common.record.DefaultRecordBatch.recordInputStream(DefaultRecordBatch.java:273)
    at org.apache.kafka.common.record.DefaultRecordBatch.compressedIterator(DefaultRecordBatch.java:277)
    at org.apache.kafka.common.record.DefaultRecordBatch.skipKeyValueIterator(DefaultRecordBatch.java:352)
    at org.apache.kafka.storage.internals.log.LogValidator.validateMessagesAndAssignOffsetsCompressed(LogValidator.java:358)
    at org.apache.kafka.storage.internals.log.LogValidator.validateMessagesAndAssignOffsets(LogValidator.java:165)
    at kafka.log.UnifiedLog.$anonfun$append$2(UnifiedLog.scala:805)
    at kafka.log.UnifiedLog.append(UnifiedLog.scala:1845)
    at kafka.log.UnifiedLog.appendAsLeader(UnifiedLog.scala:719)
    at kafka.cluster.Partition.$anonfun$appendRecordsToLeader$1(Partition.scala:1313)
    at kafka.cluster.Partition.appendRecordsToLeader(Partition.scala:1301)
    at kafka.server.ReplicaManager.$anonfun$appendToLocalLog$6(ReplicaManager.scala:1210)
    at scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:286)
    at scala.collection.mutable.HashMap.$anonfun$foreach$1(HashMap.scala:149)
    at scala.collection.mutable.HashTable.foreachEntry(HashTable.scala:237)
    at scala.collection.mutable.HashTable.foreachEntry$(HashTable.scala:230)
    at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:44)
    at scala.collection.mutable.HashMap.foreach(HashMap.scala:149)
    at scala.collection.TraversableLike.map(TraversableLike.scala:286)
    at scala.collection.TraversableLike.map$(TraversableLike.scala:279)
    at scala.collection.AbstractTraversable.map(Traversable.scala:108)
    at kafka.server.ReplicaManager.appendToLocalLog(ReplicaManager.scala:1198)
    at kafka.server.ReplicaManager.appendRecords(ReplicaManager.scala:754)
    at kafka.server.KafkaApis.handleProduceRequest(KafkaApis.scala:686)
    at kafka.server.KafkaApis.handle(KafkaApis.scala:180)
    at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:149)
    at java.base/java.lang.Thread.run(Thread.java:840)
Caused by: com.github.luben.zstd.ZstdIOException: Cannot get ByteBuffer of size 131075 from the BufferPool
    at com.github.luben.zstd.ZstdInputStreamNoFinalizer.<init>(ZstdInputStreamNoFinalizer.java:67)
    at org.apache.kafka.common.compress.ZstdFactory.wrapForInput(ZstdFactory.java:68)
    ... 27 more

On the client side this caused a lot of producer errors (Unknown broker error), but it seems that all messages were retried and ended up on another broker; no data loss was noticed.
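For reference, these are the producer settings that govern that retry behavior. This is an illustrative sketch with assumed values, not our actual configuration, and whether a given error is retried depends on how the client classifies it:

    import java.util.Properties;
    import org.apache.kafka.clients.producer.ProducerConfig;

    public class RetryConfigExample {
        public static Properties retryProps() {
            Properties props = new Properties();
            props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG,
                    "broker-1:9092,broker-2:9092,broker-3:9092"); // placeholder addresses
            // Errors the client treats as retriable are retried until
            // delivery.timeout.ms expires.
            props.put(ProducerConfig.RETRIES_CONFIG, Integer.MAX_VALUE);
            props.put(ProducerConfig.DELIVERY_TIMEOUT_MS_CONFIG, 120_000);
            // Idempotence plus acks=all avoids duplicates and data loss across retries.
            props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, true);
            props.put(ProducerConfig.ACKS_CONFIG, "all");
            return props;
        }
    }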

I couldn't find any mention of this error on the internet.

Restarting the broker fixed the issue, at least for now.

But what if this problem reappears, and on two brokers at the same time?

Upvotes: 0

Views: 71

Answers (1)

user2817340

Reputation: 13

I had the same issue. In my case, upgrading Kafka to 3.6.1 helped; it was most likely resolved by this bugfix.

Upvotes: 0
