maaartinus

Reputation: 46372

Why is hashCode slower than a similar method?

Normally, Java optimizes virtual calls based on the number of implementations encountered at a given call site. This can easily be seen in the results of my benchmark when you look at myCode, a trivial method returning a stored int. There's a trivial

static abstract class Base {
    abstract int myCode();
}

with a couple of identical implementations like

static class A extends Base {
    @Override int myCode() {
        return n;
    }
    @Override public int hashCode() {
        return n;
    }
    private final int n = nextInt();
}

With an increasing number of implementations, the timing of the method call grows from 0.4 ns through 1.2 ns for two implementations to 11.6 ns, and then grows slowly. When the JVM has seen multiple implementations, i.e., with preload=true, the timings differ slightly (because of an instanceof test that's needed).
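The measured call site can be sketched like this (a simplified, hypothetical reconstruction, not the actual benchmark code; class names follow the question):

```java
// Hypothetical sketch of the measured call site: summing myCode() over a
// mixed array. With one or two concrete types the JIT can devirtualize and
// inline; with more, the call site goes megamorphic and timings jump.
class MegamorphicSketch {
    static abstract class Base {
        abstract int myCode();
    }
    static class A extends Base { @Override int myCode() { return 1; } }
    static class B extends Base { @Override int myCode() { return 2; } }
    static class C extends Base { @Override int myCode() { return 3; } }

    static int sumMyCode(Base[] items) {
        int result = 0;
        for (Base o : items) {
            result += o.myCode(); // the virtual call being measured
        }
        return result;
    }
}
```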

So far it's all clear; however, hashCode behaves rather differently. In particular, it's 8-10 times slower in three cases. Any idea why?

UPDATE

I was curious whether the poor hashCode performance could be helped by dispatching manually, and it could, a lot.

[timing chart omitted]

A couple of branches did the job perfectly:

if (o instanceof A) {
    result += ((A) o).hashCode();
} else if (o instanceof B) {
    result += ((B) o).hashCode();
} else if (o instanceof C) {
    result += ((C) o).hashCode();
} else if (o instanceof D) {
    result += ((D) o).hashCode();
} else { // Actually impossible, but let's play it safe.
    result += o.hashCode();
}

Note that the compiler avoids such optimizations for more than two implementations, as most method calls are much more expensive than a simple field load and the gain would be small compared to the code bloat.

The original question "Why doesn't the JIT optimize hashCode like other methods?" remains, and hashCode2 proves that it indeed could.

UPDATE 2

It looks like bestsss is right, at least with this note

calling hashCode() of any class extending Base is the same as calling Object.hashCode(), and this is how it compiles in the bytecode. If you add an explicit hashCode in Base, that would limit the potential call targets when invoking Base.hashCode().

I'm not completely sure what's going on, but declaring Base.hashCode() makes hashCode competitive again.
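A sketch of what this looks like (simplified from the benchmark; the delegation to super is one possible body for it):

```java
// Sketch: declaring hashCode directly in Base. Callers with a Base-typed
// receiver then compile against Base.hashCode rather than Object.hashCode,
// narrowing the set of potential targets the JIT has to consider.
class BaseHashCodeSketch {
    static abstract class Base {
        abstract int myCode();

        @Override
        public int hashCode() {
            return super.hashCode(); // never reached by the subclass below
        }
    }

    static class A extends Base {
        private final int n;
        A(int n) { this.n = n; }
        @Override int myCode() { return n; }
        @Override public int hashCode() { return n; }
    }
}
```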

[results chart omitted]

UPDATE 3

OK, providing a concrete implementation of Base#hashCode helps. However, the JIT must know that it never gets called, as all subclasses define their own (unless another subclass gets loaded, which can lead to a deoptimization, but this is nothing new for the JIT).

So it looks like a missed optimization chance #1.

Providing an abstract declaration of Base#hashCode works the same. This makes sense, as it ensures that no further lookup is needed: each subclass must provide its own (they can't simply inherit from their grandparent).
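The abstract variant can be sketched as follows (again a hypothetical simplification of the benchmark classes):

```java
// Sketch: an abstract redeclaration of hashCode in Base. Every concrete
// subclass is forced to provide its own, so the JIT knows Object.hashCode
// can never be the actual target of the call.
class AbstractHashCodeSketch {
    static abstract class Base {
        abstract int myCode();

        @Override
        public abstract int hashCode();
    }

    static class A extends Base {
        private final int n;
        A(int n) { this.n = n; }
        @Override int myCode() { return n; }
        @Override public int hashCode() { return n; }
    }
}
```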

Still, for more than two implementations, myCode is so much faster that the compiler must be doing something suboptimal. Maybe a missed optimization chance #2?

Upvotes: 36

Views: 3878

Answers (5)

apangin

Reputation: 98284

This is a known performance issue: https://bugs.openjdk.java.net/browse/JDK-8014447
It has been fixed in JDK 8.

Upvotes: 3

Dunes

Reputation: 40683

I was looking at the invariants of your test. You have scenario.vmSpec.options.hashCode set to 0. According to this slideshow (slide 37), that means Object.hashCode uses a random number generator. That might be why the JIT compiler is less interested in optimizing calls to hashCode: it considers it likely that it may have to resort to an expensive method call, which would offset any performance gains from avoiding a vtable lookup.

This may also be why giving Base its own hashCode method improves performance: it prevents the possibility of falling through to Object.hashCode.

http://www.slideshare.net/DmitriyDumanskiy/jvm-performance-options-how-it-works

Upvotes: 0

bestsss

Reputation: 12056

hashCode is defined in java.lang.Object, so defining it in your own class doesn't do much at all (it's still a defined method, but it makes no difference).

The JIT has several ways to optimize a call site (in this case hashCode()):

  • no overrides - a static call (not virtual at all) - the best-case scenario with full optimizations
  • two types - ByteBuffer, for instance: an exact type check and then static dispatch. The type check is very simple, but depending on the usage it may or may not be predicted by the hardware.
  • inline caches - when a few different class instances have been used in the caller body, it's possible to keep them inlined too - that is, some methods might be inlined while others are called via virtual tables. The inline budget is not very high. This is exactly the case in the question: a different method not named hashCode() would use the inline caches, as there are only four implementations, instead of the v-table.
  • adding more classes going through that caller body results in a real virtual call, as the compiler gives up.
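Conceptually, the inline-cache case compiles to something like the following hand-written dispatch (a sketch of the idea only, not actual JIT output; the class names are hypothetical):

```java
// Conceptual sketch of a bimorphic inline cache: cheap type checks followed
// by calls the compiler can bind statically (and therefore inline), with a
// fallback to ordinary vtable dispatch for unexpected receiver types.
class InlineCacheSketch {
    static class A { @Override public int hashCode() { return 1; } }
    static class B { @Override public int hashCode() { return 2; } }

    static int dispatch(Object o) {
        if (o instanceof A) {
            return ((A) o).hashCode(); // devirtualizable after the check
        } else if (o instanceof B) {
            return ((B) o).hashCode();
        }
        return o.hashCode(); // megamorphic fallback via the vtable
    }
}
```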

Virtual calls are not inlined and require an indirection through the table of virtual methods, with a virtually ensured cache miss. The lack of inlining also requires full function stubs with parameters passed on the stack. Overall, the real performance killer is the inability to inline and apply optimizations.

Please note: calling hashCode() of any class extending Base is the same as calling Object.hashCode(), and this is how it compiles in the bytecode. If you add an explicit hashCode in Base, that would limit the potential call targets when invoking Base.hashCode().

Way too many classes (in the JDK itself) have hashCode() overridden, so in non-inlined HashMap-like structures the invocation is performed via the vtable - i.e., slowly.

As an extra bonus: while loading new classes, the JIT has to deoptimize existing call sites.


I may try to look up some sources if anyone is interested in further reading.

Upvotes: 4

laune

Reputation: 31290

I can confirm the findings. See these results (recompilations omitted):

$ /extra/JDK8u5/jdk1.8.0_05/bin/java Main
overCode :    14.135000000s
hashCode :    14.097000000s

$ /extra/JDK7u21/jdk1.7.0_21/bin/java Main
overCode :    14.282000000s
hashCode :    54.210000000s

$ /extra/JDK6u23/jdk1.6.0_23/bin/java Main
overCode :    14.415000000s
hashCode :   104.746000000s

The results are obtained by calling methods of class SubA extends Base repeatedly. Method overCode() is identical to hashCode(), both of which just return an int field.

Now, the interesting part: If the following method is added to class Base

@Override
public int hashCode(){
    return super.hashCode();
}

execution times for hashCode aren't different from those for overCode any more.

Base.java:

public class Base {
    private int code;

    public Base(int x) {
        code = x;
    }

    public int overCode() {
        return code;
    }
}

SubA.java:

public class SubA extends Base {
    private int code;

    public SubA(int x) {
        super(2 * x);
        code = x;
    }

    @Override
    public int overCode() {
        return code;
    }

    @Override
    public int hashCode() {
        return super.hashCode();
    }
}

Upvotes: 1

Eric Nicolas

Reputation: 1547

The semantics of hashCode() are more complex than those of regular methods, so the JVM and the JIT compiler must do more work when you call hashCode() than when you call a regular virtual method.

One specificity has a negative impact on performance: calling hashCode() on a null object is valid and returns zero. This requires one more branch than a regular call, which in itself can explain the performance difference you have observed.

Note that this seems to be true only from Java 7 onward, due to the introduction of Objects.hashCode(target), which has this semantic. It would be interesting to know which version you tested this on, and whether you would see the same on Java 6, for instance.

Another specificity has a positive impact on performance: if you do not provide your own hashCode() implementation, the JIT compiler will use inline hash-code computation code, which is faster than a regular compiled Object.hashCode call.
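For reference, the null-tolerant behavior described above matches the static helper java.util.Objects.hashCode introduced in Java 7, which is distinct from the instance method Object.hashCode:

```java
import java.util.Objects;

// Demonstrates java.util.Objects.hashCode (Java 7+): a static helper that
// returns 0 for a null argument instead of throwing NullPointerException.
class NullHashDemo {
    static int safeHash(Object o) {
        return Objects.hashCode(o);
    }
}
```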

E.

Upvotes: -2
