Mmoja

Reputation: 13

Kafka in error mode natively

I have a problem with my Kafka broker: it is always in error mode. I uninstalled and reinstalled it, with no change, and restarting does not help either. These are the logs.

I get these logs when I run kafka-server-start /usr/local/etc/kafka/server.properties:

[2017-06-16 12:13:45,107] INFO KafkaConfig values:
	advertised.host.name = null
	advertised.listeners = null
	advertised.port = null
	authorizer.class.name =
	auto.create.topics.enable = true
	auto.leader.rebalance.enable = true
	background.threads = 10
	broker.id = 0
	broker.id.generation.enable = true
	broker.rack = null
	compression.type = producer
	connections.max.idle.ms = 600000
	controlled.shutdown.enable = true
	controlled.shutdown.max.retries = 3
	controlled.shutdown.retry.backoff.ms = 5000
	controller.socket.timeout.ms = 30000
	create.topic.policy.class.name = null
	default.replication.factor = 1
	delete.topic.enable = false
	fetch.purgatory.purge.interval.requests = 1000
	group.max.session.timeout.ms = 300000
	group.min.session.timeout.ms = 6000
	host.name =
	inter.broker.listener.name = null
	inter.broker.protocol.version = 0.10.2-IV0
	leader.imbalance.check.interval.seconds = 300
	leader.imbalance.per.broker.percentage = 10
	listener.security.protocol.map = SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,TRACE:TRACE,SASL_SSL:SASL_SSL,PLAINTEXT:PLAINTEXT
	listeners = null
	log.cleaner.backoff.ms = 15000
	log.cleaner.dedupe.buffer.size = 134217728
	log.cleaner.delete.retention.ms = 86400000
	log.cleaner.enable = true
	log.cleaner.io.buffer.load.factor = 0.9
	log.cleaner.io.buffer.size = 524288
	log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308
	log.cleaner.min.cleanable.ratio = 0.5
	log.cleaner.min.compaction.lag.ms = 0
	log.cleaner.threads = 1
	log.cleanup.policy = [delete]
	log.dir = /tmp/kafka-logs
	log.dirs = /usr/local/var/lib/kafka-logs
	log.flush.interval.messages = 9223372036854775807
	log.flush.interval.ms = null
	log.flush.offset.checkpoint.interval.ms = 60000
	log.flush.scheduler.interval.ms = 9223372036854775807
	log.index.interval.bytes = 4096
	log.index.size.max.bytes = 10485760
	log.message.format.version = 0.10.2-IV0
	log.message.timestamp.difference.max.ms = 9223372036854775807
	log.message.timestamp.type = CreateTime
	log.preallocate = false
	log.retention.bytes = -1
	log.retention.check.interval.ms = 300000
	log.retention.hours = 168
	log.retention.minutes = null
	log.retention.ms = null
	log.roll.hours = 168
	log.roll.jitter.hours = 0
	log.roll.jitter.ms = null
	log.roll.ms = null
	log.segment.bytes = 1073741824
	log.segment.delete.delay.ms = 60000
	max.connections.per.ip = 2147483647
	max.connections.per.ip.overrides =
	message.max.bytes = 1000012
	metric.reporters = []
	metrics.num.samples = 2
	metrics.recording.level = INFO
	metrics.sample.window.ms = 30000
	min.insync.replicas = 1
	num.io.threads = 8
	num.network.threads = 3
	num.partitions = 1
	num.recovery.threads.per.data.dir = 1
	num.replica.fetchers = 1
	offset.metadata.max.bytes = 4096
	offsets.commit.required.acks = -1
	offsets.commit.timeout.ms = 5000
	offsets.load.buffer.size = 5242880
	offsets.retention.check.interval.ms = 600000
	offsets.retention.minutes = 1440
	offsets.topic.compression.codec = 0
	offsets.topic.num.partitions = 50
	offsets.topic.replication.factor = 3
	offsets.topic.segment.bytes = 104857600
	port = 9092
	principal.builder.class = class org.apache.kafka.common.security.auth.DefaultPrincipalBuilder
	producer.purgatory.purge.interval.requests = 1000
	queued.max.requests = 500
	quota.consumer.default = 9223372036854775807
	quota.producer.default = 9223372036854775807
	quota.window.num = 11
	quota.window.size.seconds = 1
	replica.fetch.backoff.ms = 1000
	replica.fetch.max.bytes = 1048576
	replica.fetch.min.bytes = 1
	replica.fetch.response.max.bytes = 10485760
	replica.fetch.wait.max.ms = 500
	replica.high.watermark.checkpoint.interval.ms = 5000
	replica.lag.time.max.ms = 10000
	replica.socket.receive.buffer.bytes = 65536
	replica.socket.timeout.ms = 30000
	replication.quota.window.num = 11
	replication.quota.window.size.seconds = 1
	request.timeout.ms = 30000
	reserved.broker.max.id = 1000
	sasl.enabled.mechanisms = [GSSAPI]
	sasl.kerberos.kinit.cmd = /usr/bin/kinit
	sasl.kerberos.min.time.before.relogin = 60000
	sasl.kerberos.principal.to.local.rules = [DEFAULT]
	sasl.kerberos.service.name = null
	sasl.kerberos.ticket.renew.jitter = 0.05
	sasl.kerberos.ticket.renew.window.factor = 0.8
	sasl.mechanism.inter.broker.protocol = GSSAPI
	security.inter.broker.protocol = PLAINTEXT
	socket.receive.buffer.bytes = 102400
	socket.request.max.bytes = 104857600
	socket.send.buffer.bytes = 102400
	ssl.cipher.suites = null
	ssl.client.auth = none
	ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
	ssl.endpoint.identification.algorithm = null
	ssl.key.password = null
	ssl.keymanager.algorithm = SunX509
	ssl.keystore.location = null
	ssl.keystore.password = null
	ssl.keystore.type = JKS
	ssl.protocol = TLS
	ssl.provider = null
	ssl.secure.random.implementation = null
	ssl.trustmanager.algorithm = PKIX
	ssl.truststore.location = null
	ssl.truststore.password = null
	ssl.truststore.type = JKS
	unclean.leader.election.enable = true
	zookeeper.connect = localhost:2181
	zookeeper.connection.timeout.ms = 6000
	zookeeper.session.timeout.ms = 6000
	zookeeper.set.acl = false
	zookeeper.sync.time.ms = 2000
 (kafka.server.KafkaConfig)
[2017-06-16 12:13:45,209] INFO starting (kafka.server.KafkaServer)
[2017-06-16 12:13:45,213] INFO Connecting to zookeeper on localhost:2181 (kafka.server.KafkaServer)
[2017-06-16 12:13:45,268] INFO Starting ZkClient event thread. (org.I0Itec.zkclient.ZkEventThread)
[2017-06-16 12:13:45,288] INFO Client environment:zookeeper.version=3.4.9-1757313, built on 08/23/2016 06:50 GMT (org.apache.zookeeper.ZooKeeper)
[2017-06-16 12:13:45,288] INFO Client environment:host.name=192.168.10.51 (org.apache.zookeeper.ZooKeeper)
[2017-06-16 12:13:45,289] INFO Client environment:java.version=1.8.0_131 (org.apache.zookeeper.ZooKeeper)
[2017-06-16 12:13:45,290] INFO Client environment:java.vendor=Oracle Corporation (org.apache.zookeeper.ZooKeeper)
[2017-06-16 12:13:45,290] INFO Client environment:java.home=/Library/Java/JavaVirtualMachines/jdk1.8.0_131.jdk/Contents/Home/jre (org.apache.zookeeper.ZooKeeper)
[2017-06-16 12:13:45,291] INFO Client environment:java.class.path=:/usr/local/Cellar/kafka/0.10.2.0/libexec/bin/../libs/aopalliance-repackaged-2.5.0-b05.jar:/usr/local/Cellar/kafka/0.10.2.0/libexec/bin/../libs/argparse4j-0.7.0.jar:/usr/local/Cellar/kafka/0.10.2.0/libexec/bin/../libs/connect-api-0.10.2.0.jar:/usr/local/Cellar/kafka/0.10.2.0/libexec/bin/../libs/connect-file-0.10.2.0.jar:/usr/local/Cellar/kafka/0.10.2.0/libexec/bin/../libs/connect-json-0.10.2.0.jar:/usr/local/Cellar/kafka/0.10.2.0/libexec/bin/../libs/connect-runtime-0.10.2.0.jar:/usr/local/Cellar/kafka/0.10.2.0/libexec/bin/../libs/connect-transforms-0.10.2.0.jar:/usr/local/Cellar/kafka/0.10.2.0/libexec/bin/../libs/guava-18.0.jar:/usr/local/Cellar/kafka/0.10.2.0/libexec/bin/../libs/hk2-api-2.5.0-b05.jar:/usr/local/Cellar/kafka/0.10.2.0/libexec/bin/../libs/hk2-locator-2.5.0-b05.jar:/usr/local/Cellar/kafka/0.10.2.0/libexec/bin/../libs/hk2-utils-2.5.0-b05.jar:/usr/local/Cellar/kafka/0.10.2.0/libexec/bin/../libs/jackson-annotations-2.8.0.jar:/usr/local/Cellar/kafka/0.10.2.0/libexec/bin/../libs/jackson-annotations-2.8.5.jar:/usr/local/Cellar/kafka/0.10.2.0/libexec/bin/../libs/jackson-core-2.8.5.jar:/usr/local/Cellar/kafka/0.10.2.0/libexec/bin/../libs/jackson-databind-2.8.5.jar:/usr/local/Cellar/kafka/0.10.2.0/libexec/bin/../libs/jackson-jaxrs-base-2.8.5.jar:/usr/local/Cellar/kafka/0.10.2.0/libexec/bin/../libs/jackson-jaxrs-json-provider-2.8.5.jar:/usr/local/Cellar/kafka/0.10.2.0/libexec/bin/../libs/jackson-module-jaxb-annotations-2.8.5.jar:/usr/local/Cellar/kafka/0.10.2.0/libexec/bin/../libs/javassist-3.20.0-GA.jar:/usr/local/Cellar/kafka/0.10.2.0/libexec/bin/../libs/javax.annotation-api-1.2.jar:/usr/local/Cellar/kafka/0.10.2.0/libexec/bin/../libs/javax.inject-1.jar:/usr/local/Cellar/kafka/0.10.2.0/libexec/bin/../libs/javax.inject-2.5.0-b05.jar:/usr/local/Cellar/kafka/0.10.2.0/libexec/bin/../libs/javax.servlet-api-3.1.0.jar:/usr/local/Cellar/kafka/0.10.2.0/libexec/bin/../libs/javax.ws.rs-api-2.0.1.jar:/usr/local/Cellar/kafka/0.10.2.0/libexec/bin/../libs/jersey-client-2.24.jar:/usr/local/Cellar/kafka/0.10.2.0/libexec/bin/../libs/jersey-common-2.24.jar:/usr/local/Cellar/kafka/0.10.2.0/libexec/bin/../libs/jersey-container-servlet-2.24.jar:/usr/local/Cellar/kafka/0.10.2.0/libexec/bin/../libs/jersey-container-servlet-core-2.24.jar:/usr/local/Cellar/kafka/0.10.2.0/libexec/bin/../libs/jersey-guava-2.24.jar:/usr/local/Cellar/kafka/0.10.2.0/libexec/bin/../libs/jersey-media-jaxb-2.24.jar:/usr/local/Cellar/kafka/0.10.2.0/libexec/bin/../libs/jersey-server-2.24.jar:/usr/local/Cellar/kafka/0.10.2.0/libexec/bin/../libs/jetty-continuation-9.2.15.v20160210.jar:/usr/local/Cellar/kafka/0.10.2.0/libexec/bin/../libs/jetty-http-9.2.15.v20160210.jar:/usr/local/Cellar/kafka/0.10.2.0/libexec/bin/../libs/jetty-io-9.2.15.v20160210.jar:/usr/local/Cellar/kafka/0.10.2.0/libexec/bin/../libs/jetty-security-9.2.15.v20160210.jar:/usr/local/Cellar/kafka/0.10.2.0/libexec/bin/../libs/jetty-server-9.2.15.v20160210.jar:/usr/local/Cellar/kafka/0.10.2.0/libexec/bin/../libs/jetty-servlet-9.2.15.v20160210.jar:/usr/local/Cellar/kafka/0.10.2.0/libexec/bin/../libs/jetty-servlets-9.2.15.v20160210.jar:/usr/local/Cellar/kafka/0.10.2.0/libexec/bin/../libs/jetty-util-9.2.15.v20160210.jar:/usr/local/Cellar/kafka/0.10.2.0/libexec/bin/../libs/jopt-simple-5.0.3.jar:/usr/local/Cellar/kafka/0.10.2.0/libexec/bin/../libs/kafka-clients-0.10.2.0.jar:/usr/local/Cellar/kafka/0.10.2.0/libexec/bin/../libs/kafka-log4j-appender-0.10.2.0.jar:/usr/local/Cellar/kafka/0.10.2.0/libexec/bin/../libs/kafka
-streams-0.10.2.0.jar:/usr/local/Cellar/kafka/0.10.2.0/libexec/bin/../libs/kafka-streams-examples-0.10.2.0.jar:/usr/local/Cellar/kafka/0.10.2.0/libexec/bin/../libs/kafka-tools-0.10.2.0.jar:/usr/local/Cellar/kafka/0.10.2.0/libexec/bin/../libs/kafka_2.11-0.10.2.0-sources.jar:/usr/local/Cellar/kafka/0.10.2.0/libexec/bin/../libs/kafka_2.11-0.10.2.0-test-sources.jar:/usr/local/Cellar/kafka/0.10.2.0/libexec/bin/../libs/kafka_2.11-0.10.2.0.jar:/usr/local/Cellar/kafka/0.10.2.0/libexec/bin/../libs/log4j-1.2.17.jar:/usr/local/Cellar/kafka/0.10.2.0/libexec/bin/../libs/lz4-1.3.0.jar:/usr/local/Cellar/kafka/0.10.2.0/libexec/bin/../libs/metrics-core-2.2.0.jar:/usr/local/Cellar/kafka/0.10.2.0/libexec/bin/../libs/osgi-resource-locator-1.0.1.jar:/usr/local/Cellar/kafka/0.10.2.0/libexec/bin/../libs/reflections-0.9.10.jar:/usr/local/Cellar/kafka/0.10.2.0/libexec/bin/../libs/rocksdbjni-5.0.1.jar:/usr/local/Cellar/kafka/0.10.2.0/libexec/bin/../libs/scala-library-2.11.8.jar:/usr/local/Cellar/kafka/0.10.2.0/libexec/bin/../libs/scala-parser-combinators_2.11-1.0.4.jar:/usr/local/Cellar/kafka/0.10.2.0/libexec/bin/../libs/slf4j-api-1.7.21.jar:/usr/local/Cellar/kafka/0.10.2.0/libexec/bin/../libs/slf4j-log4j12-1.7.21.jar:/usr/local/Cellar/kafka/0.10.2.0/libexec/bin/../libs/snappy-java-1.1.2.6.jar:/usr/local/Cellar/kafka/0.10.2.0/libexec/bin/../libs/validation-api-1.1.0.Final.jar:/usr/local/Cellar/kafka/0.10.2.0/libexec/bin/../libs/zkclient-0.10.jar:/usr/local/Cellar/kafka/0.10.2.0/libexec/bin/../libs/zookeeper-3.4.9.jar (org.apache.zookeeper.ZooKeeper)
[2017-06-16 12:13:45,300] INFO Client environment:java.library.path=/Users/Library/Java/Extensions:/Library/Java/Extensions:/Network/Library/Java/Extensions:/System/Library/Java/Extensions:/usr/lib/java:. (org.apache.zookeeper.ZooKeeper)
[2017-06-16 12:13:45,301] INFO Client environment:java.io.tmpdir=/var/folders/rd/v5rrgfr929q8gvxl8y0hqvdm0000gn/T/ (org.apache.zookeeper.ZooKeeper)
[2017-06-16 12:13:45,301] INFO Client environment:java.compiler=<NA> (org.apache.zookeeper.ZooKeeper)

FATAL Fatal error during KafkaServerStartable startup. Prepare to shutdown (kafka.server.KafkaServerStartable)
java.io.FileNotFoundException: /usr/local/var/lib/kafka-logs/__consumer_offsets-0/00000000000000000000.index (Permission denied)
	at java.io.RandomAccessFile.open0(Native Method)
	at java.io.RandomAccessFile.open(RandomAccessFile.java:316)
	at java.io.RandomAccessFile.<init>(RandomAccessFile.java:243)
	at kafka.log.AbstractIndex.<init>(AbstractIndex.scala:50)
	at kafka.log.OffsetIndex.<init>(OffsetIndex.scala:52)
	at kafka.log.LogSegment.<init>(LogSegment.scala:72)
	at kafka.log.Log$$anonfun$loadSegments$4.apply(Log.scala:210)
	at kafka.log.Log$$anonfun$loadSegments$4.apply(Log.scala:188)
	at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:733)
	at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
	at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:186)
	at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:732)
	at kafka.log.Log.loadSegments(Log.scala:188)
	at kafka.log.Log.<init>(Log.scala:116)
	at kafka.log.LogManager$$anonfun$loadLogs$2$$anonfun$3$$anonfun$apply$10$$anonfun$apply$1.apply$mcV$sp(LogManager.scala:157)
	at kafka.utils.CoreUtils$$anon$1.run(CoreUtils.scala:57)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:748)
[2017-06-16 12:13:46,489] INFO [Kafka Server 0], shutting down (kafka.server.KafkaServer)

What could be the issue? I'm on OS X.

Upvotes: 0

Views: 2210

Answers (1)

Daniccan

Reputation: 2795

java.io.FileNotFoundException: /usr/local/var/lib/kafka-logs/__consumer_offsets-0/00000000000000000000.index (Permission denied)

The line above indicates that your Kafka process does not have permission to write to the following path:

/usr/local/var/lib/kafka-logs

Find the owner and group of that directory, then either start the Kafka service as the owning user or change the directory's owner and group accordingly, as sketched below.
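For example, on OS X you could inspect the ownership and, if needed, reclaim the directory like this (a minimal sketch, assuming the Homebrew-style path from the error message and that the broker should run as your login user):

# See who currently owns the log directory and the offending partition directory
ls -ld /usr/local/var/lib/kafka-logs
ls -l /usr/local/var/lib/kafka-logs/__consumer_offsets-0

# If they are owned by another user (e.g. root from an earlier run under sudo),
# hand them back to your login user and its primary group
sudo chown -R "$(whoami):$(id -gn)" /usr/local/var/lib/kafka-logs

After that, start the broker again with the same command from the question.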

Most probably, you will need to run the Kafka service as a sudo user.
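For instance, reusing the start command and config path from the question:

sudo kafka-server-start /usr/local/etc/kafka/server.properties

Fixing the directory ownership as shown above is usually the cleaner option, since running the broker under sudo can keep creating root-owned files in the log directory.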

Upvotes: 1
