Reputation: 675
I use the Druid Kafka indexing service to load my own streams from Kafka, following the Load from Kafka tutorial. Kafka has all settings at their defaults (just extracted from the tgz).
When I start imply-2.2.3 (Druid) with empty data (after removing the var folder), everything works properly. But when I stop Kafka 2.11-0.10.2.0 and start it again, the error below occurs, and Druid Kafka ingestion no longer works until I stop Imply (Druid) and remove all data (i.e. remove the var folder).
Sometimes Druid simply stops ingesting data from Kafka even though there are no errors in Kafka. When I remove Druid's var folder, everything works again until the same error occurs.
Error:
kafka.common.NoReplicaOnlineException: No replica for partition [__consumer_offsets,19] is alive. Live brokers are: [Set()], Assigned replicas are: [List(0)]
at kafka.controller.OfflinePartitionLeaderSelector.selectLeader(PartitionLeaderSelector.scala:73) ~[kafka_2.11-0.10.2.0.jar:?]
at kafka.controller.PartitionStateMachine.electLeaderForPartition(PartitionStateMachine.scala:339) ~[kafka_2.11-0.10.2.0.jar:?]
at kafka.controller.PartitionStateMachine.kafka$controller$PartitionStateMachine$$handleStateChange(PartitionStateMachine.scala:200) [kafka_2.11-0.10.2.0.jar:?]
at kafka.controller.PartitionStateMachine$$anonfun$triggerOnlinePartitionStateChange$3.apply(PartitionStateMachine.scala:115) [kafka_2.11-0.10.2.0.jar:?]
at kafka.controller.PartitionStateMachine$$anonfun$triggerOnlinePartitionStateChange$3.apply(PartitionStateMachine.scala:112) [kafka_2.11-0.10.2.0.jar:?]
at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:733) [scala-library-2.11.8.jar:?]
at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:99) [scala-library-2.11.8.jar:?]
at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:99) [scala-library-2.11.8.jar:?]
at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:230) [scala-library-2.11.8.jar:?]
at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:40) [scala-library-2.11.8.jar:?]
at scala.collection.mutable.HashMap.foreach(HashMap.scala:99) [scala-library-2.11.8.jar:?]
at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:732) [scala-library-2.11.8.jar:?]
at kafka.controller.PartitionStateMachine.triggerOnlinePartitionStateChange(PartitionStateMachine.scala:112) [kafka_2.11-0.10.2.0.jar:?]
at kafka.controller.PartitionStateMachine.startup(PartitionStateMachine.scala:67) [kafka_2.11-0.10.2.0.jar:?]
at kafka.controller.KafkaController.onControllerFailover(KafkaController.scala:342) [kafka_2.11-0.10.2.0.jar:?]
at kafka.controller.KafkaController$$anonfun$1.apply$mcV$sp(KafkaController.scala:160) [kafka_2.11-0.10.2.0.jar:?]
at kafka.server.ZookeeperLeaderElector.elect(ZookeeperLeaderElector.scala:85) [kafka_2.11-0.10.2.0.jar:?]
at kafka.server.ZookeeperLeaderElector$$anonfun$startup$1.apply$mcZ$sp(ZookeeperLeaderElector.scala:51) [kafka_2.11-0.10.2.0.jar:?]
at kafka.server.ZookeeperLeaderElector$$anonfun$startup$1.apply(ZookeeperLeaderElector.scala:49) [kafka_2.11-0.10.2.0.jar:?]
at kafka.server.ZookeeperLeaderElector$$anonfun$startup$1.apply(ZookeeperLeaderElector.scala:49) [kafka_2.11-0.10.2.0.jar:?]
at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:213) [kafka_2.11-0.10.2.0.jar:?]
at kafka.server.ZookeeperLeaderElector.startup(ZookeeperLeaderElector.scala:49) [kafka_2.11-0.10.2.0.jar:?]
at kafka.controller.KafkaController$$anonfun$startup$1.apply$mcV$sp(KafkaController.scala:681) [kafka_2.11-0.10.2.0.jar:?]
at kafka.controller.KafkaController$$anonfun$startup$1.apply(KafkaController.scala:677) [kafka_2.11-0.10.2.0.jar:?]
at kafka.controller.KafkaController$$anonfun$startup$1.apply(KafkaController.scala:677) [kafka_2.11-0.10.2.0.jar:?]
at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:213) [kafka_2.11-0.10.2.0.jar:?]
at kafka.controller.KafkaController.startup(KafkaController.scala:677) [kafka_2.11-0.10.2.0.jar:?]
at kafka.server.KafkaServer.startup(KafkaServer.scala:224) [kafka_2.11-0.10.2.0.jar:?]
at kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:39) [kafka_2.11-0.10.2.0.jar:?]
at kafka.Kafka$.main(Kafka.scala:67) [kafka_2.11-0.10.2.0.jar:?]
at kafka.Kafka.main(Kafka.scala) [kafka_2.11-0.10.2.0.jar:?]
The steps I followed:
1. Start Imply:
bin/supervise -c conf/supervise/quickstart.conf
2. Start Kafka:
./bin/kafka-server-start.sh config/server.properties
3. Create topic:
./bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic wikiticker
4. Enable Druid Kafka ingestion:
curl -XPOST -H'Content-Type: application/json' -d @quickstart/wikiticker-kafka-supervisor.json http://localhost:8090/druid/indexer/v1/supervisor
5. Post events to the Kafka topic; they are then ingested into Druid by the Kafka indexing service (see the example after this list).
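A minimal way to post the tutorial events is the console producer that ships with Kafka 0.10 (the sample file path below is an assumption based on the Imply distribution layout and may differ in your setup):
./bin/kafka-console-producer.sh --broker-list localhost:9092 --topic wikiticker < /path/to/imply-2.2.3/quickstart/wikiticker-2015-09-12-sampled.json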
In all the .properties files (common.runtime.properties, broker, coordinator, historical, middleManager, overlord) I added the property:
druid.extensions.loadList=["druid-caffeine-cache", "druid-histogram", "druid-datasketches", "druid-kafka-indexing-service"]
which includes "druid-kafka-indexing-service" to enable the ingestion service.
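For reference, a Kafka supervisor spec like quickstart/wikiticker-kafka-supervisor.json has roughly this shape (a sketch with illustrative values, not the exact tutorial file; the dataSchema section is elided here):
{
  "type": "kafka",
  "dataSchema": { ... },
  "ioConfig": {
    "topic": "wikiticker",
    "consumerProperties": { "bootstrap.servers": "localhost:9092" },
    "taskCount": 1,
    "replicas": 1,
    "taskDuration": "PT1H"
  },
  "tuningConfig": { "type": "kafka" }
}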
I believe such problems shouldn't occur with the Druid Kafka indexing service.
Is there a way to figure out this issue?
Upvotes: 0
Views: 638
Reputation: 675
I fixed it by adding 3 Kafka brokers and setting up all topics with a replication factor of 3 for Kafka stability.
On the Druid side I fixed the problem by increasing druid.worker.capacity in the middleManager configuration and decreasing taskDuration in the ioConfig of the supervisor spec, as sketched below.
Details are in another question.
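A sketch of both changes, with illustrative values (the right numbers depend on your hardware and latency needs):
In conf/druid/middleManager/runtime.properties (or the equivalent file in your layout):
# allow more concurrent indexing tasks per middleManager
druid.worker.capacity=4
In the supervisor spec's ioConfig, shorten how long each task runs before publishing its segments:
"taskDuration": "PT10M"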
Upvotes: 0
Reputation: 8345
Looks like you have a single-node Kafka cluster and the only broker node is down. That's not a very fault-tolerant setup. You should have 3 Kafka brokers and set up all topics with a replication factor of 3, so that the system keeps working even if one or two Kafka brokers are down. Single-node clusters are typically only used for development.
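For example, once three brokers are up, a fully replicated topic can be created like this (addresses are illustrative):
./bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 3 --partitions 3 --topic wikiticker
With a replication factor of 3, each partition has a copy on three brokers, so losing one or two brokers still leaves a live replica to elect as leader.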
Upvotes: 0
Reputation: 2313
The message indicates that the broker with id 0 is down, and because it is the only broker hosting that partition, you can't use that partition right now. You have to make sure broker 0 is up and serving.
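Two standard ways to check this (the ZooKeeper address is illustrative): list the registered broker ids, and describe the partition's leader and replicas:
./bin/zookeeper-shell.sh localhost:2181 ls /brokers/ids
./bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic __consumer_offsets
If /brokers/ids is empty or the Leader column shows -1, broker 0 is not registered in ZooKeeper and needs to be restarted.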
Upvotes: 0