Jary zhen

Reputation: 447

log.debug("Coordinator discovery failed for group {}, refreshing metadata", groupId) with kafka 0.11.0.x

I'm using the Kafka (version 0.11.0.2) server API to start a Kafka broker on localhost. It runs without any problem, and the producer can send messages successfully. But the consumer can't get any messages, and there is no error in the console log. So I debugged the code and found it looping on "refreshing metadata".

Here is the source code

while (coordinatorUnknown()) {
    RequestFuture<Void> future = lookupCoordinator();
    client.poll(future, remainingMs);

    if (future.failed()) {
        if (future.isRetriable()) {
            remainingMs = timeoutMs - (time.milliseconds() - startTimeMs);
            if (remainingMs <= 0)
                break;

            log.debug("Coordinator discovery failed for group {}, refreshing metadata", groupId);
            client.awaitMetadataUpdate(remainingMs);
        } else
            throw future.exception();
    } else if (coordinator != null && client.connectionFailed(coordinator)) {
        // we found the coordinator, but the connection has failed, so mark
        // it dead and backoff before retrying discovery
        coordinatorDead();
        time.sleep(retryBackoffMs);
    }

    remainingMs = timeoutMs - (time.milliseconds() - startTimeMs);
    if (remainingMs <= 0)
        break;
}

Addition: when I change the Kafka version to 0.10.x, it runs OK.

Here is my Kafka server code.

private static void startKafkaLocal() throws Exception {
    final File kafkaTmpLogsDir = File.createTempFile("zk_kafka", "2");
    if (kafkaTmpLogsDir.delete() && kafkaTmpLogsDir.mkdir()) {
        Properties props = new Properties();
        props.setProperty("host.name", KafkaProperties.HOSTNAME);
        props.setProperty("port", String.valueOf(KafkaProperties.KAFKA_SERVER_PORT));
        props.setProperty("broker.id", String.valueOf(KafkaProperties.BROKER_ID));
        props.setProperty("zookeeper.connect", KafkaProperties.ZOOKEEPER_CONNECT);
        props.setProperty("log.dirs", kafkaTmpLogsDir.getAbsolutePath());
        //advertised.listeners=PLAINTEXT://xxx.xx.xx.xx:por

        // flush every message
        props.setProperty("log.default.flush.scheduler.interval.ms", "1");
        props.setProperty("log.flush.interval", "1");
        props.setProperty("log.flush.interval.messages", "1");
        props.setProperty("replica.socket.timeout.ms", "1500");
        props.setProperty("auto.create.topics.enable", "true");
        props.setProperty("num.partitions", "1");

        KafkaConfig kafkaConfig = new KafkaConfig(props);

        KafkaServerStartable kafka = new KafkaServerStartable(kafkaConfig);
        kafka.startup();
        System.out.println("start kafka ok " + kafka.serverConfig().numPartitions());
    }
}

Thanks.

Upvotes: 0

Views: 2919

Answers (1)

Mickael Maison

Reputation: 26885

With Kafka 0.11, if you run a single-broker cluster, you also need to set the following 3 settings, because the internal offsets and transaction-state topics default to a replication factor of 3:

offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1

That should be obvious from your server logs when running 0.11.
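Applied to the embedded-broker setup in the question, the overrides could look like the sketch below. `singleNodeOverrides` is a hypothetical helper name, not part of any Kafka API; without these settings a one-broker 0.11 cluster can never create the `__consumer_offsets` topic at its default replication factor of 3, so coordinator discovery keeps failing and the consumer loops on "refreshing metadata".

```java
import java.util.Properties;

public class SingleNodeKafkaConfig {

    // Extra broker settings required on Kafka 0.11 when there is only one
    // broker: shrink the internal topics' replication requirements to 1 so
    // the group coordinator can actually be created.
    static Properties singleNodeOverrides(Properties props) {
        props.setProperty("offsets.topic.replication.factor", "1");
        props.setProperty("transaction.state.log.replication.factor", "1");
        props.setProperty("transaction.state.log.min.isr", "1");
        return props;
    }

    public static void main(String[] args) {
        Properties props = singleNodeOverrides(new Properties());
        // These properties would then be passed into KafkaConfig as in the
        // question's startKafkaLocal() method.
        System.out.println(props.getProperty("offsets.topic.replication.factor"));
    }
}
```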

Upvotes: 1
