Rafe

Reputation: 558

reduce the internal log level for Log4j2 (with Kafka Appender)

I'm using Log4j2 (v2.17.2) to send information directly to Kafka, and am using XML for the configuration (as many articles mention that XML handles far more configuration options than properties files do). The issue is that my console is filled to the brim with irrelevant INFO log lines, for example:

[main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka version: 3.1.0
[main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka commitId: 37edeed0777bacb3
[main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka startTimeMs: 1652851625060
[pool-2-thread-1] INFO org.apache.kafka.clients.consumer.KafkaConsumer - [Consumer clientId=consumer-logInfo-1, groupId=logInfo] Subscribed to topic(s): logInfo
[pool-2-thread-1] INFO org.apache.kafka.clients.Metadata - [Consumer clientId=consumer-logInfo-1, groupId=logInfo] Cluster ID: 37Prit7oRwSnQ-CX5_Iwvw

I've tried all the techniques from programmatically-change-log-level-in-log4j2, with no change to the logging:

Configurator.setLevel("org.apache.kafka", Level.WARN);

Has anyone had any luck getting the log level down from INFO? I really don't want to have to trawl through that much information to find the issues!

--edit--

I've also gone through and explicitly set the level per class:

Configurator.setLevel("org.apache.kafka.clients.producer.ProducerConfig", Level.ERROR);
Configurator.setLevel("org.apache.kafka.clients.consumer.ConsumerConfig", Level.ERROR);

with the same results in the log:

[main] INFO org.apache.kafka.clients.producer.ProducerConfig - ProducerConfig values: 
[Thread-1] INFO org.apache.kafka.clients.consumer.ConsumerConfig - ConsumerConfig values: 

Upvotes: 1

Views: 2329

Answers (3)

Ma3oxuct

Reputation: 43

If you are using log4j2 and also trying to do this with log4j2.properties, this is what worked for me:

logger.kafka-org.name = org.apache.kafka
logger.kafka-org.level = warn
logger.kafka.name = kafka
logger.kafka.level = warn
logger.kafka-state.name = state.change.logger
logger.kafka-state.level = warn
logger.zookeeper.name = org.apache.zookeeper
logger.zookeeper.level = warn
logger.curator.name = org.apache.curator
logger.curator.level = warn

I hope this saves some poor soul trying to do the same thing the two hours it took me to figure out. You need to map the logger by its name, which is whatever the implementer calls it (often the package name, but not always), and then set its level.
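Schematically, each entry ties an arbitrary id of your choosing to the logger's registered name and the desired level (the angle-bracket placeholders below are mine, not real keys):

```properties
# <id> is any label you pick; it only ties the two keys together.
# The .name value must match the logger's name exactly as the library
# registers it (often, but not always, the package name).
logger.<id>.name = <logger name>
logger.<id>.level = <warn|error|info|...>
```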

Upvotes: 4

Srohr

Reputation: 11

Another way, since Kafka uses SLF4J, is to add the following to the pom.xml:

<dependency>
    <groupId>org.apache.logging.log4j</groupId>
    <artifactId>log4j-slf4j-impl</artifactId>
    <version>2.11.1</version>
</dependency>

With that dependency in place, the Kafka entries should already appear in your log. Then, in log4j2.xml, you can control the log level like so:

<?xml version="1.0" encoding="UTF-8"?>
<Configuration status="error" shutdownHook="disable">
    <Appenders>
        <Console name="Console" target="SYSTEM_OUT">
            <PatternLayout pattern="[%d{yyyy-MM-dd HH:mm:ss,SSS}][%c{1}:%L][%-5p] %m%n" />
        </Console>
    </Appenders>
    <Loggers>
        <Root level="${sys:log.level:-DEBUG}">
            <AppenderRef ref="Console" />
        </Root>
        <Logger name="org.apache.kafka" level="WARN"/>
    </Loggers>
</Configuration>

The important line is: <Logger name="org.apache.kafka" level="WARN"/>

Upvotes: 1

Rafe

Reputation: 558

It turns out that the answer is very simple: kafka-clients-3.1.0.jar (required to get the Kafka appender to work) uses SLF4J for its logging, so it doesn't respond to any changes to Log4j levels!

The following line fixes the issue:

System.setProperty(org.slf4j.impl.SimpleLogger.DEFAULT_LOG_LEVEL_KEY, "ERROR");
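For context, `SimpleLogger.DEFAULT_LOG_LEVEL_KEY` is the constant string `"org.slf4j.simpleLogger.defaultLogLevel"`. A minimal sketch using the raw property name (so it compiles without the slf4j-simple dependency); note that the property must be set before the first SLF4J logger is created, otherwise SimpleLogger will already have read its configuration:

```java
public class KafkaLogLevelFix {
    public static void main(String[] args) {
        // Same effect as the line above, without importing org.slf4j.impl.SimpleLogger.
        // Must run before any logger is obtained (SimpleLogger reads this once, at init).
        System.setProperty("org.slf4j.simpleLogger.defaultLogLevel", "ERROR");
        System.out.println(System.getProperty("org.slf4j.simpleLogger.defaultLogLevel"));
    }
}
```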

Upvotes: 0
