sva605

Reputation: 1681

Why does JMH show the same results for different implementations?

JMH reports the same result for different methods, no matter whether those methods contain any code or not.

Example 1: empty method to be tested

public class MyBenchmark {
    public static void main(String[] args) throws Exception {
        org.openjdk.jmh.Main.main(args);
    }

    @Fork(value = 1, warmups = 0)
    @Benchmark
    @BenchmarkMode(Mode.AverageTime)
    @Warmup(iterations = 5)
    public String run() {
        return "done";
    }
}

The result of running this code is 1e-8 s/op.

Example 2: method with some work to do:

public class MyBenchmark {
    public static void main(String[] args) throws Exception {
        for (int i = 0; i < 10000000; i++) {
            list.add(i);
        }
        org.openjdk.jmh.Main.main(args);
    }

    private static List<Integer> list = new ArrayList<>();

    @Fork(value = 1, warmups = 0)
    @Benchmark
    @BenchmarkMode(Mode.AverageTime)
    @Warmup(iterations = 5)
    public String run() {
        List<Integer> copy = new ArrayList<>();
        for (Integer item : list) {
            copy.add(item);
        }
        return "done";
    }
}

The result is the same: 1e-8 s/op.

So, the benchmark is clearly not working. What might be wrong?

Upvotes: 0

Views: 970

Answers (1)

Ivan Mamontov

Reputation: 2924

You are using too coarse a time scale - seconds per operation, which is far too large a unit for your no-op test. Just add the @OutputTimeUnit(TimeUnit.NANOSECONDS) annotation to your benchmark:

import org.openjdk.jmh.annotations.*;

import java.util.concurrent.TimeUnit;

public class MyBenchmark {
    public static void main(String[] args) throws Exception {
        org.openjdk.jmh.Main.main(args);
    }

    @Fork(value = 1, warmups = 0)
    @Benchmark
    @BenchmarkMode(Mode.AverageTime)
    @Warmup(iterations = 5)
    @OutputTimeUnit(TimeUnit.NANOSECONDS)
    public String run() {
        return "done";
    }
}

with the following result:

# Run complete. Total time: 00:00:25

Benchmark        Mode  Cnt  Score   Error  Units
MyBenchmark.run  avgt   20  5.390 ± 0.264  ns/op

As for your second example, it contains almost every possible issue/pitfall:

  • dead code elimination - the JVM is smart enough to detect a loop without side effects and remove it from the method body (returning the result, or sinking it into a Blackhole as sketched after the corrected example below, prevents this)
  • wrong measurement unit - as in the first example, seconds per operation is too coarse to reveal any difference
  • incorrect initialization - the list is populated in main(), but the benchmark itself runs in a forked JVM where that code never executes; JMH provides the @Setup and @State annotations for proper benchmark initialization

Here is a "correct" version of your example(I just removed oblivious mistakes):

import org.openjdk.jmh.annotations.*;

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.TimeUnit;

@State(Scope.Benchmark)
@Fork(value = 1)
public class MyBenchmark {

    private List<Integer> list;

    @Setup
    public void init() {
        list = new ArrayList<>();
        for (int i = 0; i < 10000000; i++) {
            list.add(i);
        }
    }

    @Benchmark
    @BenchmarkMode(Mode.AverageTime)
    @Warmup(iterations = 5)
    @OutputTimeUnit(TimeUnit.NANOSECONDS)
    public Object run() {
        List<Integer> copy = new ArrayList<>();
        for (Integer item : list) {
            copy.add(item);
        }
        return copy;
    }
}

with the following latency:

# Run progress: 0.00% complete, ETA 00:00:25
# Fork: 1 of 1
# Warmup Iteration   1: 2488116493.000 ns/op
# Warmup Iteration   2: 201271178.600 ns/op
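
Returning copy from the benchmark method is what keeps the copy loop alive. If you prefer a void method, you can sink the result into JMH's Blackhole instead. A minimal sketch of the same benchmark, otherwise identical to the corrected version above:

import org.openjdk.jmh.annotations.*;
import org.openjdk.jmh.infra.Blackhole;

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.TimeUnit;

@State(Scope.Benchmark)
@Fork(value = 1)
public class MyBenchmark {

    private List<Integer> list;

    @Setup
    public void init() {
        list = new ArrayList<>();
        for (int i = 0; i < 10000000; i++) {
            list.add(i);
        }
    }

    @Benchmark
    @BenchmarkMode(Mode.AverageTime)
    @Warmup(iterations = 5)
    @OutputTimeUnit(TimeUnit.NANOSECONDS)
    public void run(Blackhole bh) {
        List<Integer> copy = new ArrayList<>();
        for (Integer item : list) {
            copy.add(item);
        }
        // Hand the result to the Blackhole so the JIT cannot treat the loop as dead code
        bh.consume(copy);
    }
}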

In order to understand all the possible issues with microbenchmarking, please read through the JMH samples.

Upvotes: 3
