TeilaRei

Reputation: 587

Kafka 1.0 stops with FATAL SHUTDOWN error. Logs directory failed

I have just upgraded to Kafka 1.0 and ZooKeeper 3.4.10. At first, it all started fine. A stand-alone producer and consumer worked as expected. After my code had run for about 10 minutes, Kafka failed with this error:

[2017-11-07 16:48:01,304] INFO Stopping serving logs in dir C:\Kafka\kafka_2.12-1.0.0\kafka-logs (kafka.log.LogManager)

[2017-11-07 16:48:01,320] FATAL Shutdown broker because all log dirs in C:\Kafka\kafka_2.12-1.0.0\kafka-logs have failed (kafka.log.LogManager)

I have reinstalled and reconfigured Kafka 1.0, and the same thing happened. If I try to restart, the same error occurs.

Deleting the log files lets Kafka start, but it fails again after a short run.

I ran version 0.10.2 for a long while and never encountered anything like this; it was very stable over long periods of time.

I have tried to find a solution and followed instructions in the documentation.

This is not yet a production environment, it is fairly simple setup, one producer, one consumer reading from one topic.

I am not sure if this could have anything to do with zookeeper.

**Update:** the issue has been posted to the Apache JIRA board. The consensus so far seems to be that it is a Windows issue.

Upvotes: 30

Views: 49135

Answers (10)

Aakash Mahawar

Reputation: 51

If none of the above methods work in your case, even though you have done everything correctly, try changing broker.id in server.properties; that should make this particular error go away.

# The id of the broker. This must be set to a unique integer for each broker.
broker.id=1

Upvotes: 0

user2897775

Reputation: 725

On Windows, changing the path separators resolved the issue; each backslash must be doubled, e.g. C:\\path\\logs
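A minimal server.properties sketch of the two forms that work on Windows (the path here is a placeholder, not from the original setup):

```properties
# Backslashes are escape characters in .properties files, so double them:
log.dirs=C:\\path\\logs
# Forward slashes also work on Windows and sidestep the escaping entirely:
# log.dirs=C:/path/logs
```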

Upvotes: 0

Md. Kamruzzaman

Reputation: 1905

Simply delete all the logs from:

C:\tmp\kafka-logs

and restart ZooKeeper and the Kafka server.

Upvotes: -2

Alex Gleb

Reputation: 21

The problem is concurrent work on Kafka's log files: segment files are changed or deleted while other Kafka threads still hold them open. The workaround is to delay those file changes.

This topic configuration can help:

import java.util.HashMap;
import java.util.Map;
// Constants come from org.apache.kafka.common.config.TopicConfig
import static org.apache.kafka.common.config.TopicConfig.*;

Map<String, String> config = new HashMap<>();
config.put(CLEANUP_POLICY_CONFIG, CLEANUP_POLICY_COMPACT);
config.put(FILE_DELETE_DELAY_MS_CONFIG, "3600000");   // delay file deletion by 1 hour
config.put(DELETE_RETENTION_MS_CONFIG, "864000000");
config.put(RETENTION_MS_CONFIG, "86400000");

Upvotes: 1

Mehdi Fracso

Reputation: 518

What worked for me was deleting both the Kafka and ZooKeeper log directories, then changing the log-directory paths in both the Kafka and ZooKeeper server.properties files (can be found in kafka/conf/server.properties) from the usual slash '/' to a backslash '\'.

Upvotes: 0

Alexander Oh

Reputation: 25641

So this seems to be a Windows issue.

https://issues.apache.org/jira/browse/KAFKA-6188

The JIRA is resolved, and there is an unmerged patch attached to it.

https://github.com/apache/kafka/pull/6403

So your options are:

  • get it running on Windows and build it with the patch
  • run it on a Unix-style filesystem (Linux or Mac)
  • perhaps running it in Docker on Windows is worth a shot
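For the Docker route, a minimal docker-compose sketch (the wurstmeister images are one commonly used community option, not something from the original answer; hostnames and ports here are assumptions):

```yaml
version: '2'
services:
  zookeeper:
    image: wurstmeister/zookeeper
    ports: ["2181:2181"]
  kafka:
    image: wurstmeister/kafka
    ports: ["9092:9092"]
    environment:
      # Point the broker at the zookeeper service defined above
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_ADVERTISED_HOST_NAME: localhost
    depends_on: [zookeeper]
```

The broker then writes its logs inside the container's Linux filesystem, which avoids the Windows file-locking behavior entirely.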

Upvotes: 1

Ujjwal Pathak

Reputation: 686

I've tried all the solutions, like:

  • Clearing the Kafka logs and ZooKeeper data (the issue reoccurred after creating a new topic)
  • Changing the log.dirs path from forward slash "/" to backslash "\" (like log.dirs=C:\kafka_2.12-2.1.1\data\kafka); a folder named C:\kafka_2.12-2.1.1\kafka_2.12-2.1.1datakafka was created instead, because the backslashes were swallowed as escape characters, and the issue was not resolved.

Finally I found this link on DZone; you'll get it if you google kafka log.dirs windows.

Upvotes: 8

nukalov

Reputation: 1357

Ran into this issue as well, and only clearing the kafka-logs did not work. You'll also have to clear the ZooKeeper data.

Steps to resolve:

  1. Make sure to stop zookeeper.
  2. Take a look at your server.properties file and locate the logs directory under the following entry.

    Example:
    log.dirs=/tmp/kafka-logs/
    
  3. Delete the log directory and its contents. Kafka will recreate the directory once it's started again.

  4. Take a look at the zookeeper.properties file and locate the data directory under the following entry.

    Example:
    dataDir=/tmp/zookeeper
    
  5. Delete the data directory and its contents. Zookeeper will recreate the directory once it's started again.

  6. Start zookeeper.

    <KAFKA_HOME>bin/zookeeper-server-start.sh -daemon <KAFKA_HOME>config/zookeeper.properties
    
  7. Start the Kafka broker.

    <KAFKA_HOME>bin/kafka-server-start.sh -daemon <KAFKA_HOME>config/server.properties
    
  8. Verify the broker has started with no issues by looking at the logs/kafkaServer.out log file.
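Steps 2 and 4 above boil down to reading two directory settings out of the properties files before deleting them. A minimal sketch of that parsing step, using a throwaway properties file as a stand-in for your real server.properties:

```shell
# Create a stand-in properties file (replace with <KAFKA_HOME>/config/server.properties).
props=$(mktemp)
printf 'broker.id=0\nlog.dirs=/tmp/kafka-logs/\n' > "$props"

# Extract the value after 'log.dirs=' -- this is the directory to delete in step 3.
log_dirs=$(grep '^log.dirs=' "$props" | cut -d'=' -f2)
echo "$log_dirs"

rm -f "$props"
```

The same grep/cut pattern works for `dataDir=` in zookeeper.properties (step 4), so you are certain you delete exactly the directories the servers are configured to use.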

Upvotes: 40

nick318

Reputation: 575

Just clean the logs in C:\Kafka\kafka_2.12-1.0.0\kafka-logs and restart Kafka.

Upvotes: 2

Praveen L

Reputation: 987

If you are running on a Windows machine, try changing the log.dirs parameter to a Windows-style path (like log.dirs=C:\some_path\some_path_kafLogs) in server.properties in the /config folder.

By default, this path is written Unix-style (like /unix/path/).

This worked for me on a Windows machine.

Upvotes: 1
