radumanolescu

Kafka server change log level

I am trying to change the logging level in the Kafka server because the logs are too verbose. I looked at which classes log at the DEBUG level and counted the log lines per logger, e.g.:

kafka.cluster.Partition                  1235094
o.apache.kafka.clients.NetworkClient       70375
o.a.k.clients.FetchSessionHandler          69363
kafka.log.LogCleanerManager$               56400

For instance, a log line from the kafka.cluster.Partition class logger looks like this:

21:41:01.041 [data-plane-kafka-request-handler-4] DEBUG kafka.cluster.Partition - [Partition __transaction_state-43 broker=3] Recorded replica 1 log end offset (LEO) position 0 and log start offset 0.
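
For reference, counts like these can be produced with something along the lines of the following sketch (application.log is a placeholder for our single application log file, and the awk field position assumes the line layout shown above):

grep ' DEBUG ' application.log | awk '{print $4}' | sort | uniq -c | sort -rn | head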

I tried to change this in log4j.properties by adding the following lines:

log4j.logger.kafka.cluster.Partition=INFO
log4j.additivity.kafka.cluster.Partition=false

I expected kafka.cluster.Partition to log only at INFO level and above. Instead, it still logs at DEBUG level.

How can I fix this?

Using Kafka 3.0.0
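
As far as I can tell, the broker picks up log4j.properties through the KAFKA_LOG4J_OPTS system property set by the stock start script; pointing it at the edited file explicitly would look roughly like this (the path is a placeholder):

export KAFKA_LOG4J_OPTS="-Dlog4j.configuration=file:/path/to/log4j.properties"
bin/kafka-server-start.sh config/server.properties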

Sharing the full log4j.properties below, as requested in the comments. I believe it is quite close to the default version that ships with the Kafka server.

Note that our corporate framework for running servers redirects both stdout and stderr to a single application log file, so it probably does not matter which appender we specify. What I am looking to do is filter which lines get logged, and that should not depend on which appender is used.

kafka.logs.dir=logs

log4j.rootLogger=INFO, stdout

log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=[%d] %p %m (%c)%n

log4j.appender.kafkaAppender=org.apache.log4j.DailyRollingFileAppender
log4j.appender.kafkaAppender.DatePattern='.'yyyy-MM-dd-HH
log4j.appender.kafkaAppender.File=${kafka.logs.dir}/server.log
log4j.appender.kafkaAppender.layout=org.apache.log4j.PatternLayout
log4j.appender.kafkaAppender.layout.ConversionPattern=[%d] %p %m (%c)%n

log4j.appender.stateChangeAppender=org.apache.log4j.DailyRollingFileAppender
log4j.appender.stateChangeAppender.DatePattern='.'yyyy-MM-dd-HH
log4j.appender.stateChangeAppender.File=${kafka.logs.dir}/state-change.log
log4j.appender.stateChangeAppender.layout=org.apache.log4j.PatternLayout
log4j.appender.stateChangeAppender.layout.ConversionPattern=[%d] %p %m (%c)%n

log4j.appender.requestAppender=org.apache.log4j.DailyRollingFileAppender
log4j.appender.requestAppender.DatePattern='.'yyyy-MM-dd-HH
log4j.appender.requestAppender.File=${kafka.logs.dir}/kafka-request.log
log4j.appender.requestAppender.layout=org.apache.log4j.PatternLayout
log4j.appender.requestAppender.layout.ConversionPattern=[%d] %p %m (%c)%n

log4j.appender.cleanerAppender=org.apache.log4j.DailyRollingFileAppender
log4j.appender.cleanerAppender.DatePattern='.'yyyy-MM-dd-HH
log4j.appender.cleanerAppender.File=${kafka.logs.dir}/log-cleaner.log
log4j.appender.cleanerAppender.layout=org.apache.log4j.PatternLayout
log4j.appender.cleanerAppender.layout.ConversionPattern=[%d] %p %m (%c)%n

log4j.appender.controllerAppender=org.apache.log4j.DailyRollingFileAppender
log4j.appender.controllerAppender.DatePattern='.'yyyy-MM-dd-HH
log4j.appender.controllerAppender.File=${kafka.logs.dir}/controller.log
log4j.appender.controllerAppender.layout=org.apache.log4j.PatternLayout
log4j.appender.controllerAppender.layout.ConversionPattern=[%d] %p %m (%c)%n

# Turn on all our debugging info
#log4j.logger.kafka.producer.async.DefaultEventHandler=DEBUG, kafkaAppender
#log4j.logger.kafka.client.ClientUtils=DEBUG, kafkaAppender
#log4j.logger.kafka.perf=DEBUG, kafkaAppender
#log4j.logger.kafka.perf.ProducerPerformance$ProducerThread=DEBUG, kafkaAppender
#log4j.logger.org.I0Itec.zkclient.ZkClient=DEBUG
log4j.logger.kafka=INFO, kafkaAppender

log4j.logger.kafka.network.RequestChannel$=WARN, requestAppender
log4j.additivity.kafka.network.RequestChannel$=false

#log4j.logger.kafka.network.Processor=TRACE, requestAppender
#log4j.logger.kafka.server.KafkaApis=TRACE, requestAppender
#log4j.additivity.kafka.server.KafkaApis=false
log4j.logger.kafka.request.logger=WARN, requestAppender
log4j.additivity.kafka.request.logger=false

log4j.logger.kafka.controller=INFO, controllerAppender
log4j.additivity.kafka.controller=false

log4j.logger.kafka.log.LogCleaner=INFO, cleanerAppender
log4j.additivity.kafka.log.LogCleaner=false

log4j.logger.state.change.logger=INFO, stateChangeAppender
log4j.additivity.state.change.logger=false

Upvotes: 1

Views: 6416

Answers (1)

radumanolescu

Looking at the JARs on the classpath, I concluded that our Kafka installation (plus some custom code, which probably pulled in the dependencies) was actually logging through Logback rather than log4j. I found these JARs:

logback-classic-1.0.11.jar
logback-core-1.0.11.jar

So instead of modifying the log4j.properties, which was probably being ignored, I dropped a logback.xml onto the classpath:

<configuration>
    <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
        <encoder>
            <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
        </encoder>
    </appender>

    <root level="info">
        <appender-ref ref="STDOUT" />
    </root>
</configuration>

The result is that logging has been reduced to the level specified in the logback.xml.
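
If individual loggers still need levels different from the root, Logback also supports per-logger overrides. A minimal sketch, using logger names from the question, with the <logger> elements placed inside <configuration> before <root>:

    <!-- per-logger overrides take precedence over the root level -->
    <logger name="kafka.cluster.Partition" level="WARN"/>
    <logger name="org.apache.kafka.clients" level="WARN"/>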

Upvotes: 0
