WestCoastProjects

Reputation: 63082

Kafka does not start on Windows - key not found: \tmp\kafka-logs

I have put some effort into getting Kafka to run on 32-bit Windows (a company-issued laptop - certainly not my choice..).

I was able to create a handful of topics. But after stopping/restarting Kafka it is unable to re-read those topics. Here are the startup logs:

[2014-05-29 12:26:23,097] INFO [ReplicaFetcherManager on broker 0] Removed fetcher for partitions [vip_ips_alerts,0],[calls,0],[dropped_calls,0],[calls_online,0],[calls_no_phone,0] (kafka.server.ReplicaFetcherManager)
[2014-05-29 12:26:23,106] ERROR [KafkaApi-0] error when handling request Name:LeaderAndIsrRequest;Version:0;Controller:0;ControllerEpoch:4;CorrelationId:5;ClientId:id_0-host_null-port_9092;Leaders:id:0,host:S80035683-SC01.mycompany.com,port:9092;PartitionState:(vip_ips_alerts,0) -> (LeaderAndIsrInfo:(Leader:0,ISR:0,LeaderEpoch:3,ControllerEpoch:4),ReplicationFactor:1),AllReplicas:0),(calls,0) -> (LeaderAndIsrInfo:(Leader:0,ISR:0,LeaderEpoch:1,ControllerEpoch:4),ReplicationFactor:1),AllReplicas:0),(dropped_calls,0) -> (LeaderAndIsrInfo:(Leader:0,ISR:0,LeaderEpoch:3,ControllerEpoch:4),ReplicationFactor:1),AllReplicas:0),(calls_online,0) -> (LeaderAndIsrInfo:(Leader:0,ISR:0,LeaderEpoch:3,ControllerEpoch:4),ReplicationFactor:1),AllReplicas:0),(calls_no_phone,0) -> (LeaderAndIsrInfo:(Leader:0,ISR:0,LeaderEpoch:3,ControllerEpoch:4),ReplicationFactor:1),AllReplicas:0) (kafka.server.KafkaApis)
java.util.NoSuchElementException: key not found: \tmp\kafka-logs
        at scala.collection.MapLike$class.default(MapLike.scala:225)
        at scala.collection.immutable.Map$Map1.default(Map.scala:107)
        at scala.collection.MapLike$class.apply(MapLike.scala:135)
        at scala.collection.immutable.Map$Map1.apply(Map.scala:107)
        at kafka.cluster.Partition.getOrCreateReplica(Partition.scala:91)
        at kafka.cluster.Partition$$anonfun$makeLeader$2.apply(Partition.scala:175)
        at kafka.cluster.Partition$$anonfun$makeLeader$2.apply(Partition.scala:175)
        at scala.collection.immutable.Set$Set1.foreach(Set.scala:86)
        at kafka.cluster.Partition.makeLeader(Partition.scala:175)
        at kafka.server.ReplicaManager$$anonfun$makeLeaders$5.apply(ReplicaManager.scala:305)
        at kafka.server.ReplicaManager$$anonfun$makeLeaders$5.apply(ReplicaManager.scala:304)
        at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:95)
        at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:95)
        at scala.collection.Iterator$class.foreach(Iterator.scala:772)
        at scala.collection.mutable.HashTable$$anon$1.foreach(HashTable.scala:157)
        at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:190)
        at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:45)
        at scala.collection.mutable.HashMap.foreach(HashMap.scala:95)
        at kafka.server.ReplicaManager.makeLeaders(ReplicaManager.scala:304)
        at kafka.server.ReplicaManager.becomeLeaderOrFollower(ReplicaManager.scala:258)
        at kafka.server.KafkaApis.handleLeaderAndIsrRequest(KafkaApis.scala:100)
        at kafka.server.KafkaApis.handle(KafkaApis.scala:72)
        at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:42)
        at java.lang.Thread.run(Thread.java:744)

Now I am OK with dropping/re-creating the topics; I have actually done so several times as part of my investigation (e.g. to rule out ZooKeeper corruption). Any tips on how to get a Kafka server up on this out-of-date OS would be appreciated.

Upvotes: 1

Views: 894

Answers (2)

Prateek Sinha

Reputation: 181

==== For Windows ====

1. To generate a UUID: copy the Kafka folder to C:\ (or any other drive), go to C:\kafka, and open cmd (preferably, not PowerShell), then run:

kafka-storage.bat random-uuid

2. To set the UUID to a variable:

set KAFKA_CLUSTER_ID={use the UUID generated above}

3. To format the log directories:

.\bin\windows\kafka-storage.bat format -t %KAFKA_CLUSTER_ID% -c .\config\kraft\server.properties

(pass the actual random-uuid generated in step 1 in case of error)

Upvotes: 0

om-nom-nom

Reputation: 62835

A misinterpreted log.dir is a huge source of pain in Kafka on both Unixes and Windows.

It seems that the exception was caused by the following statement in Partition:

replicaManager.highWatermarkCheckpoints(log.dir.getParent)

It tries to look up the key "\kafka8-tmp\kafka-logs" in a map of highWatermarkCheckpoint files, but it doesn't exist. The keys are registered using the property value in log.dirs.

source
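As an illustration of that failure mode, here is a minimal REPL-style Scala sketch with hypothetical map contents: a Scala immutable Map throws NoSuchElementException with exactly this "key not found" message when apply is called with a key it was never registered under, which is what the MapLike/Map$Map1 frames in the stack trace show.

// Hypothetical checkpoint map, keyed by the string configured in log.dirs
// (forward slashes, as typed into server.properties).
val highWatermarkCheckpoints = Map("/tmp/kafka-logs" -> "replication-offset-checkpoint")

// Looking it up with the Windows-normalized form of the same path fails with
// java.util.NoSuchElementException: key not found: \tmp\kafka-logs
highWatermarkCheckpoints("\\tmp\\kafka-logs")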

Make sure you don't have trailing slashes and that new java.io.File("\tmp\kafka-logs").getParent is not distorted (I don't have a Windows machine right next to me to figure out all these forward/backward slashes myself).
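To check whether the slashes are being distorted on your box, a quick Scala REPL experiment along these lines (the directory names are hypothetical) should show it: on Windows, java.io.File normalizes '/' to '\', so the parent of a partition directory under a log.dirs value written with forward slashes no longer equals the configured string.

import java.io.File

val configured   = "/tmp/kafka-logs"               // value as written in log.dirs (hypothetical)
val partitionDir = new File(configured, "calls-0") // a topic-partition directory

println(configured)             // /tmp/kafka-logs
println(partitionDir.getParent) // \tmp\kafka-logs   (on Windows)

If the two printed paths differ, that mismatch is consistent with the missing-key error above.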

Upvotes: 2
