El Marce

Reputation: 3344

JMH: Making a microbenchmark's results public

I have read that in order to avoid dead code elimination in microbenchmarks the most common solutions are:

  1. Return the result of the calculation.
  2. Consume the result using a Blackhole.
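The two idioms can be sketched roughly as follows (a minimal sketch assuming the standard `@Benchmark`/`@State` annotations and the `Blackhole` infrastructure class; `DceIdioms` is a made-up class name, and the fragment needs the jmh-core dependency and the JMH harness to run, so it is not a standalone program):

```java
import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.State;
import org.openjdk.jmh.infra.Blackhole;

@State(Scope.Thread)
public class DceIdioms {
    double x = Math.PI;

    // Idiom 1: return the result; JMH consumes the return value itself.
    @Benchmark
    public double returnResult() {
        return Math.log(x);
    }

    // Idiom 2: feed the result to a Blackhole; handy when the benchmark
    // produces several values that must all stay live.
    @Benchmark
    public void consumeWithBlackhole(Blackhole bh) {
        bh.consume(Math.log(x));
    }
}
```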

My question is:

Could it be possible to avoid dead code elimination by placing the calculation's result in a public variable?

EDIT:

Thanks to Shipilev's answer I realized that returning the results or consuming them with Blackholes must be done properly in order to avoid dead code elimination (DCE), as explained in the JMH examples.

Therefore, I will rewrite my question to make it clearer:

In cases where returning the result of the calculation or consuming it with Blackholes is enough to avoid DCE, is it also enough to place the result in a public variable?

I have run a variation of the example JMHSample_08_DeadCode like this:

public double sink;

@Benchmark
public void measureRightPerhaps_2() {
    // Store the result in a public field instead of returning it.
    // (`x` is the benchmark state field from JMHSample_08_DeadCode.)
    sink = Math.log(x);
}

and from the results it seems so:

Benchmark              Mode  Cnt   Score   Error  Units
baseline               avgt   15   0,458 ± 0,001  ns/op
measureRight           avgt   15  33,233 ± 0,268  ns/op
measureRightPerhaps_2  avgt   15  30,177 ± 0,603  ns/op
measureWrong           avgt   15   0,459 ± 0,001  ns/op
measureWrong_2         avgt   15   0,917 ± 0,001  ns/op

Upvotes: 2

Views: 665

Answers (1)

Aleksey Shipilev

Reputation: 18847

This is very simple to answer: nope, that's not safe, unless you are controlling the environment, verifying that no ill effects are happening, etc. The simplest scenario in which it breaks is for the optimizer to figure out there are several consecutive stores to the field, and eliminate everything but the last store. E.g., take the well-known JMHSample_08_DeadCode, and add this test:

public double sink;

@Benchmark
public void measureWrong_2() {
    // What could possibly go wrong?
    sink = Math.log(x);

    // Imagine this happens somewhere downstream.
    // Or, you are sinking in the loop.
    // Or, measureWrong_2 got inlined and the very next Math.log will sink.
    sink = Math.PI;
}

...then run it, and weep:

Benchmark                             Mode  Cnt   Score   Error  Units
JMHSample_08_DeadCode.baseline        avgt    5   0.251 ± 0.001  ns/op
JMHSample_08_DeadCode.measureRight    avgt    5  19.034 ± 0.033  ns/op
JMHSample_08_DeadCode.measureWrong    avgt    5   0.251 ± 0.001  ns/op
JMHSample_08_DeadCode.measureWrong_2  avgt    5   0.326 ± 0.001  ns/op
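The store elimination at work here can be mimicked in plain Java, without JMH (a hypothetical `SinkDemo` class, for illustration only): once `measureWrong_2` returns, only the last store to `sink` is observable, which is exactly what licenses the JIT to drop the earlier store together with the `Math.log` call feeding it.

```java
// Hypothetical illustration (not part of the JMH samples): the first
// store to `sink` is a dead store -- its value is never read before it
// is overwritten -- so an optimizer may remove it and, with it, the
// Math.log call, leaving almost nothing to measure.
class SinkDemo {
    public double sink;
    double x = 42.0; // stand-in for the benchmark state field

    void measureWrong_2() {
        sink = Math.log(x); // dead store: overwritten before any read
        sink = Math.PI;     // the only store whose value survives
    }
}
```

After the call, `sink` is `Math.PI` regardless of `x`, which is consistent with the near-baseline score above.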

Moral: Unless you know what you are doing, don't break away from what the JMH docs mention as the supported way to avoid DCE.

UPDATE: Of course, you can find a corner case where some other technique works. But, even if something is working at the moment, you cannot be sure it would keep working after some other innocuous change. That's the whole point of using Blackholes -- they always work. E.g., take the more sophisticated case JMHSample_09_Blackholes, where you can "accidentally" make two back-to-back stores into sink:

@Benchmark
public void measureRight_2(Blackhole bh) {
    bh.consume(Math.log(x1));
    bh.consume(Math.log(x2));
}

public double sink;

@Benchmark
public void measureWrong_2() {
    sink = Math.log(x1);
    sink = Math.log(x2);
}

...and:

JMHSample_09_Blackholes.measureRight_1  avgt    5  35.837 ± 0.043  ns/op
JMHSample_09_Blackholes.measureRight_2  avgt    5  38.378 ± 0.071  ns/op
JMHSample_09_Blackholes.measureWrong    avgt    5  19.012 ± 0.009  ns/op
JMHSample_09_Blackholes.measureWrong_2  avgt    5  16.659 ± 0.018  ns/op

Oops. Blackholes are working, and sinks are not: that's a counter-example for your updated question. Unless you verify every benchmark that uses that trick, and scrutinize the generated code to confirm the trick behaves as intended, you cannot be sure the trick is working. My point is that you'd better spend time figuring out the issues specific to your benchmark (99% of all benchmark errors), rather than trying to cheat a few nanoseconds off the harness. Priorities!

Maintainer's perspective now. JMH development tracks what is being done in updated JVMs as they evolve. Blackholes are getting fixed along the way. The code shapes for JMH benchmark stubs are getting corrected. But they are verified on valid benchmarks that use the advertised guarantees. We have no reason to care about benchmarks that do something on their own. If, e.g., compilers become able to inline the @Benchmark method and unroll the external loop that JMH runs, that will set up the sink for the trouble discovered above. In other words, if you want future-proofness for your code, use known and documented APIs, not tricks.

Upvotes: 2
