foxgem

Reputation: 271

kafka-log4j-appender 0.9 does not work

I added a log4j Kafka appender in my log4j.properties, but it did not work as I expected.

Before posting this question, I checked my log4j.properties against this similar question on Stack Overflow, which is about 0.8. However, I had no luck.

Here is my log4j.properties:

log4j.appender.Kafka=org.apache.kafka.log4jappender.KafkaLog4jAppender
log4j.appender.Kafka.topic=my-topic
log4j.appender.Kafka.brokerList=localhost:9092
log4j.appender.Kafka.layout=org.apache.log4j.EnhancedPatternLayout
log4j.appender.Kafka.layout.ConversionPattern=%d [%t] %-5p %c - %m%n

When I started my app, I could see the Kafka producer was started:

[main] DEBUG org.apache.kafka.clients.producer.KafkaProducer - Kafka producer started
[kafka-producer-network-thread | producer-1] DEBUG org.apache.kafka.clients.producer.internals.Sender - Starting Kafka producer I/O thread.

But the appender did not work and threw an exception:

[kafka-producer-network-thread | producer-1] DEBUG org.apache.kafka.clients.producer.KafkaProducer - Exception occurred during message send:
org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms.

I also checked my Kafka + ZooKeeper environment; the broker address in my log4j.properties is correct. Now I have no idea what's wrong and hope someone can give me a hand. The following is the whole output:

[main] INFO  org.apache.kafka.clients.producer.ProducerConfig - ProducerConfig values:
        compression.type = none
        metric.reporters = []
        metadata.max.age.ms = 300000
        metadata.fetch.timeout.ms = 60000
        reconnect.backoff.ms = 50
        sasl.kerberos.ticket.renew.window.factor = 0.8
        bootstrap.servers = [localhost:9092]
        retry.backoff.ms = 100
        sasl.kerberos.kinit.cmd = /usr/bin/kinit
        buffer.memory = 33554432
        timeout.ms = 30000
        key.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
        sasl.kerberos.service.name = null
        sasl.kerberos.ticket.renew.jitter = 0.05
        ssl.keystore.type = JKS
        ssl.trustmanager.algorithm = PKIX
        block.on.buffer.full = false
        ssl.key.password = null
        max.block.ms = 60000
        sasl.kerberos.min.time.before.relogin = 60000
        connections.max.idle.ms = 540000
        ssl.truststore.password = null
        max.in.flight.requests.per.connection = 5
        metrics.num.samples = 2
        client.id =
        ssl.endpoint.identification.algorithm = null
        ssl.protocol = TLS
        request.timeout.ms = 30000
        ssl.provider = null
        ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
        acks = 1
        batch.size = 16384
        ssl.keystore.location = null
        receive.buffer.bytes = 32768
        ssl.cipher.suites = null
        ssl.truststore.type = JKS
        security.protocol = PLAINTEXT
        retries = 0
        max.request.size = 1048576
        value.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
        ssl.truststore.location = null
        ssl.keystore.password = null
        ssl.keymanager.algorithm = SunX509
        metrics.sample.window.ms = 30000
        partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
        send.buffer.bytes = 131072
        linger.ms = 0

[main] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name bufferpool-wait-time
[main] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name buffer-exhausted-records
[main] DEBUG org.apache.kafka.clients.Metadata - Updated cluster metadata version 1 to Cluster(nodes = [Node(-1, localhost, 9092)], partitions = [])
[main] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name connections-closed:client-id-producer-1
[main] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name connections-created:client-id-producer-1
[main] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name bytes-sent-received:client-id-producer-1
[main] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name bytes-sent:client-id-producer-1
[main] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name bytes-received:client-id-producer-1
[main] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name select-time:client-id-producer-1
[main] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name io-time:client-id-producer-1
[main] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name batch-size
[main] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name compression-rate
[main] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name queue-time
[main] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name request-time
[main] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name produce-throttle-time
[main] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name records-per-request
[main] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name record-retries
[main] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name errors
[main] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name record-size-max
[main] INFO  org.apache.kafka.common.utils.AppInfoParser - Kafka version : 0.9.0.0
[main] INFO  org.apache.kafka.common.utils.AppInfoParser - Kafka commitId : fc7243c2af4b2b4a
[main] DEBUG org.apache.kafka.clients.producer.KafkaProducer - Kafka producer started
[kafka-producer-network-thread | producer-1] DEBUG org.apache.kafka.clients.producer.internals.Sender - Starting Kafka producer I/O thread.
...
[kafka-producer-network-thread | producer-1] DEBUG org.apache.kafka.clients.producer.KafkaProducer - Exception occurred during message send:
org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms.

Thanks

Upvotes: 5

Views: 5392

Answers (2)

Mohammad

Reputation: 55

I had a problem like this with both log4j and logback.

When the appender level was INFO, everything worked fine, but when I changed the level to DEBUG, I started getting this error after some time: org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms.

The problem was that KafkaProducer itself emits trace and debug logs, and the appender was trying to send those logs to Kafka too, so it got stuck in a loop.

Changing the org.apache.kafka package's log level to INFO, or pointing its appender at a file or stdout, solved the problem.
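For example, a minimal log4j.properties sketch of that fix (assuming a console appender named Console is defined elsewhere in the file): route the Kafka client's own logging away from the Kafka appender so the loop cannot form:

# keep the Kafka client's own DEBUG/TRACE output out of the Kafka appender
log4j.logger.org.apache.kafka=INFO, Console
log4j.additivity.org.apache.kafka=false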

Upvotes: 0

foxgem

Reputation: 271

Finally, I fixed it. Here is my new log4j.properties:

log4j.rootLogger=DEBUG, Console

log4j.appender.Console=org.apache.log4j.ConsoleAppender
log4j.appender.Console.layout=org.apache.log4j.EnhancedPatternLayout
log4j.appender.Console.layout.ConversionPattern=%d [%t] %-5p %c - %m%n

log4j.category.foxgem=DEBUG, Kafka
log4j.additivity.foxgem=false

log4j.appender.Kafka=org.apache.kafka.log4jappender.KafkaLog4jAppender
log4j.appender.Kafka.topic=logTopic
log4j.appender.Kafka.brokerList=localhost:9092
log4j.appender.Kafka.layout=org.apache.log4j.EnhancedPatternLayout
log4j.appender.Kafka.layout.ConversionPattern=%d [%t] %-5p %c - %m%n

log4j.logger.io.vertx=WARN
log4j.logger.io.netty=WARN

I also created an example on GitHub to show how to use this appender.

The changes I made:

  1. Remove the Kafka appender from the rootLogger. My old log4j.properties had:

    log4j.rootLogger=DEBUG, Console, Kafka
    
  2. Add a log category for the packages whose log output should go to Kafka:

    log4j.category.foxgem=DEBUG, Kafka
    log4j.additivity.foxgem=false
    

I think the reason is: with the old rootLogger, the log output from Kafka itself also went to Kafka, which caused the timeout.
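To illustrate how the routing works with this config (the class name and message below are hypothetical), any logger whose name falls under the foxgem category goes to the Kafka appender, while everything else, including the Kafka client itself, only inherits the Console appender from the rootLogger:

import org.apache.log4j.Logger;

public class LogDemo {
    // "foxgem.LogDemo" falls under log4j.category.foxgem, so this logger
    // writes to the Kafka appender (topic logTopic) and, because
    // additivity is off, not to the Console appender.
    private static final Logger LOG = Logger.getLogger("foxgem.LogDemo");

    public static void main(String[] args) {
        LOG.info("this message goes to the logTopic Kafka topic");
        // org.apache.kafka.* loggers are outside the foxgem category,
        // so their output stays on the Console appender only.
    }
}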

Upvotes: 5
