Pasha

Reputation: 1912

How to correctly create performance tests for a custom logging solution?

At work, I took the initiative to create a logging library from scratch, to be used across internal services and projects.

To date, I have created a proof of concept with a custom class that extends PatternLayout and obfuscates certain fields within the output log entries.

Now the main task is to verify that performance is acceptable and that my solution can handle millions of log entries without messages being dropped.

I decided to write performance tests for my custom Logback layout, the default Logback layout, and Log4j2 in order to compare the results. I have written JMH benchmarks driven from a unit test.

<?xml version="1.0" encoding="UTF-8"?>
<configuration>

    <appender name="FILE_APPLY_RULES" class="ch.qos.logback.core.FileAppender">
        <file>logs\log.txt</file>
        <encoder class="ch.qos.logback.core.encoder.LayoutWrappingEncoder">
            <layout class="com.ingenico.epayments.logging.rule.LoggingRulesApplicator">
                <pattern>%msg%n</pattern>
                <applyRules>true</applyRules>
                <loggingRules>src/test/resources/logging-rules.json</loggingRules>
            </layout>
        </encoder>
    </appender>

    <appender name="ASYNC_FILE_APPLY_RULES" class="ch.qos.logback.classic.AsyncAppender">
        <discardingThreshold>0</discardingThreshold>
        <appender-ref ref="FILE_APPLY_RULES" />
        <queueSize>256</queueSize>
        <includeCallerData>false</includeCallerData>
        <neverBlock>true</neverBlock>
    </appender>

    <root level="INFO">
        <appender-ref ref="ASYNC_FILE_APPLY_RULES" />
    </root>

</configuration>

import static java.util.concurrent.TimeUnit.MILLISECONDS;

import java.util.concurrent.TimeUnit;

import lombok.extern.slf4j.Slf4j;
import org.junit.Test;
import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.BenchmarkMode;
import org.openjdk.jmh.annotations.Measurement;
import org.openjdk.jmh.annotations.Mode;
import org.openjdk.jmh.profile.HotspotRuntimeProfiler;
import org.openjdk.jmh.profile.StackProfiler;
import org.openjdk.jmh.runner.Runner;
import org.openjdk.jmh.runner.RunnerException;
import org.openjdk.jmh.runner.options.Options;
import org.openjdk.jmh.runner.options.OptionsBuilder;
import org.openjdk.jmh.runner.options.TimeValue;

@Slf4j
public class LogbackTest {

    @Test
    public void launchBenchmark() throws RunnerException {
        Options opt = new OptionsBuilder()
                .include(this.getClass().getName() + ".*")
                .timeUnit(TimeUnit.MICROSECONDS)
                .warmupTime(TimeValue.seconds(1))
                .warmupIterations(2)
                .measurementTime(TimeValue.seconds(1))
                .measurementIterations(2)
                .threads(2)
                .forks(1)
                .shouldFailOnError(true)
                .shouldDoGC(true)
                .addProfiler(StackProfiler.class)
                .addProfiler(HotspotRuntimeProfiler.class)
                .build();
        new Runner(opt).run();
    }

    @BenchmarkMode(Mode.All)
    @Measurement(iterations = 200, time = 200, timeUnit = MILLISECONDS)
    @Benchmark
    public void logBenchmark() {
        log.info("{\"id\":\"12345\",\"cardData\":{\"bar\":\"bar\",\"foo\":\"foo\",\"number\":\"number\"}}");
    }

}

The tests run, and the output contains a lot of result numbers, but I don't know which values should be compared. And how can I be sure that good performance results are not simply due to messages being dropped?

Could you please give me a hint on how to prepare a valid performance test?

Upvotes: 1

Views: 482

Answers (1)

Karol Dowbecki

Reputation: 44962

You have an asynchronous appender (AsyncAppender) that can potentially drop messages when its queue fills up. Meanwhile, the JMH benchmark expects the code in the @Benchmark-annotated method to execute synchronously so that it can measure how long it takes. This won't work.
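If you still want evidence that no messages were lost, a separate correctness check, outside JMH, is one option: write a known number of messages, stop the LoggerContext so the AsyncAppender drains its queue, and then count the lines that actually reached the file. A rough sketch, assuming the log file path from your configuration; the class name and payload are illustrative only:

import ch.qos.logback.classic.LoggerContext;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.stream.Stream;

public class DroppedMessagesCheck {

    public static void main(String[] args) throws Exception {
        Logger log = LoggerFactory.getLogger(DroppedMessagesCheck.class);

        int expected = 1_000_000;
        for (int i = 0; i < expected; i++) {
            log.info("{\"id\":\"12345\",\"seq\":{}}", i);
        }

        // Stopping the context stops the AsyncAppender, which waits
        // (up to maxFlushTime) for its worker thread to drain the queue.
        ((LoggerContext) LoggerFactory.getILoggerFactory()).stop();

        // With the %msg%n pattern, each message is one line in the file.
        long written;
        try (Stream<String> lines = Files.lines(Paths.get("logs/log.txt"))) {
            written = lines.count();
        }
        System.out.printf("expected=%d written=%d dropped=%d%n",
                expected, written, expected - written);
    }
}

With neverBlock=true and a queue size of 256, you should expect this to report a substantial number of dropped messages under load.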

IMO you are attempting to test too many things at once, i.e. the whole Logback stack. Since you developed a custom layout class, LoggingRulesApplicator, start by writing a JMH benchmark just for this class.
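A rough sketch of such a benchmark is below. It builds a LoggingEvent by hand and measures doLayout alone, so no appender, queue, or I/O is involved. The setApplyRules/setLoggingRules setters are assumptions inferred from the <applyRules>/<loggingRules> elements in your XML (Joran needs matching setters, but the exact signatures are a guess here):

import java.util.concurrent.TimeUnit;

import ch.qos.logback.classic.Level;
import ch.qos.logback.classic.Logger;
import ch.qos.logback.classic.LoggerContext;
import ch.qos.logback.classic.spi.ILoggingEvent;
import ch.qos.logback.classic.spi.LoggingEvent;
import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.BenchmarkMode;
import org.openjdk.jmh.annotations.Mode;
import org.openjdk.jmh.annotations.OutputTimeUnit;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.Setup;
import org.openjdk.jmh.annotations.State;

@State(Scope.Benchmark)
@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.MICROSECONDS)
public class LoggingRulesApplicatorBenchmark {

    private LoggingRulesApplicator layout;
    private ILoggingEvent event;

    @Setup
    public void setup() {
        LoggerContext context = new LoggerContext();

        layout = new LoggingRulesApplicator();
        layout.setContext(context);
        layout.setPattern("%msg%n");
        layout.setApplyRules(true); // assumed setter, mirrors <applyRules>
        layout.setLoggingRules("src/test/resources/logging-rules.json"); // assumed setter
        layout.start();

        Logger logger = context.getLogger("benchmark");
        event = new LoggingEvent(Logger.class.getName(), logger, Level.INFO,
                "{\"id\":\"12345\",\"cardData\":{\"bar\":\"bar\",\"foo\":\"foo\",\"number\":\"number\"}}",
                null, null);
    }

    @Benchmark
    public String applyRules() {
        // Returning the result prevents JMH from dead-code-eliminating the call.
        return layout.doLayout(event);
    }
}

Once you know the cost of the layout in isolation, you can re-introduce the appenders one at a time and see what each adds.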

Take a look at the existing Log4j2 JMH benchmarks. The setup there is simplified by using a no-op appender, which you should probably use as well.
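In Logback terms, a no-op appender might look like the sketch below (assuming Logback 1.2+, where Encoder.encode(event) returns a byte[]). It still drives the encoder, and therefore your layout, but discards the bytes, so the measurement excludes disk I/O:

import ch.qos.logback.classic.spi.ILoggingEvent;
import ch.qos.logback.core.UnsynchronizedAppenderBase;
import ch.qos.logback.core.encoder.Encoder;

public class NoopAppender extends UnsynchronizedAppenderBase<ILoggingEvent> {

    private Encoder<ILoggingEvent> encoder;

    // Called by Joran for a nested <encoder> element in logback.xml.
    public void setEncoder(Encoder<ILoggingEvent> encoder) {
        this.encoder = encoder;
    }

    @Override
    protected void append(ILoggingEvent event) {
        // Run the encoder (and the layout inside it) but throw away the result.
        encoder.encode(event);
    }
}

Reference it from logback.xml in place of the FileAppender, keeping the same LayoutWrappingEncoder and layout, and the benchmark numbers will reflect the layout work rather than file-system speed.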

Upvotes: 2
