In a single-cluster, single-instance installation of Kafka/ZooKeeper v2.4.1 (binary kafka_2.13-2.4.1.tgz) on Windows Subsystem for Linux (WSL) with Ubuntu 18.04, the Kafka broker shuts down unexpectedly while cleaning up log files, with the error message below:
ERROR Failed to clean up log for __consumer_offsets-11 in dir /tmp/kafka-logs due to IOException (kafka.server.LogDirFailureChannel)
java.io.IOException: Invalid argument
at java.io.RandomAccessFile.setLength(Native Method)
at kafka.log.AbstractIndex.$anonfun$resize$1(AbstractIndex.scala:188)
at scala.runtime.java8.JFunction0$mcZ$sp.apply(JFunction0$mcZ$sp.scala:17)
at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:253)
at kafka.log.AbstractIndex.resize(AbstractIndex.scala:174)
at kafka.log.AbstractIndex.$anonfun$trimToValidSize$1(AbstractIndex.scala:240)
at scala.runtime.java8.JFunction0$mcZ$sp.apply(JFunction0$mcZ$sp.scala:17)
at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:253)
at kafka.log.AbstractIndex.trimToValidSize(AbstractIndex.scala:240)
at kafka.log.LogSegment.onBecomeInactiveSegment(LogSegment.scala:508)
at kafka.log.Cleaner.cleanSegments(LogCleaner.scala:595)
at kafka.log.Cleaner.$anonfun$doClean$6(LogCleaner.scala:530)
at kafka.log.Cleaner.$anonfun$doClean$6$adapted(LogCleaner.scala:529)
at scala.collection.immutable.List.foreach(List.scala:305)
at kafka.log.Cleaner.doClean(LogCleaner.scala:529)
at kafka.log.Cleaner.clean(LogCleaner.scala:503)
at kafka.log.LogCleaner$CleanerThread.cleanLog(LogCleaner.scala:372)
at kafka.log.LogCleaner$CleanerThread.cleanFilthiestLog(LogCleaner.scala:345)
at kafka.log.LogCleaner$CleanerThread.tryCleanFilthiestLog(LogCleaner.scala:325)
at kafka.log.LogCleaner$CleanerThread.doWork(LogCleaner.scala:314)
at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:96)
The directory __consumer_offsets-11 that fails to be cleaned up does exist.
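The check can be reproduced with something like the following (the path is the default `log.dirs` from server.properties):

```shell
# Report whether the partition directory the cleaner failed on is present
if [ -d /tmp/kafka-logs/__consumer_offsets-11 ]; then
  echo "directory exists"
else
  echo "directory missing"
fi
```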
I tried the following:
This error occurs many times a day, regardless of the log retention configuration. The server configuration (server.properties) is the default:
broker.id=0
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
log.dirs=/tmp/kafka-logs
num.partitions=1
num.recovery.threads.per.data.dir=1
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1
log.retention.hours=168
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
zookeeper.connect=localhost:2181
zookeeper.connection.timeout.ms=6000
group.initial.rebalance.delay.ms=0
The problem seems to be your log.dirs setting, which is currently /tmp/kafka-logs. This can cause trouble because the contents of /tmp/ may be purged when your machine restarts (many distributions clear /tmp on boot), taking the broker's log segments and index files with it. Try changing the path to a permanent location outside /tmp/.
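A minimal sketch of the change, shown here against a scratch copy of server.properties (in a real setup you would edit `config/server.properties` under your Kafka install directly; the target path is an example):

```shell
# Demonstrate the edit on a scratch copy of server.properties
printf 'log.dirs=/tmp/kafka-logs\n' > server.properties.demo

# Create a persistent log directory outside /tmp (example path)
NEW_LOG_DIR="$HOME/kafka-logs"
mkdir -p "$NEW_LOG_DIR"

# Point log.dirs at the new location
sed -i "s|^log.dirs=.*|log.dirs=$NEW_LOG_DIR|" server.properties.demo

# Verify the change
grep '^log.dirs=' server.properties.demo
```

After restarting ZooKeeper and the broker, Kafka will recreate its topic data under the new directory.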