Billybong

Reputation: 707

How to make sense of the "new" Java flight recorder ObjectAllocationSample event?

While profiling our application on OpenJDK 16.1 I'm seeing some weird allocation samples recorded in the ObjectAllocationSample event.

The very first sample event is this one, which initially gave me a bit of a scare:

jdk.ObjectAllocationSample {
  startTime = 22:22:12.027
  objectClass = java.util.ArrayList (classLoader = bootstrap)
  weight = 6.1 GB
  eventThread = "My thread" (javaThreadId = 52)
  stackTrace = [
    jdk.jfr.internal.PlatformRecorder.getListeners() line: 169
    jdk.jfr.internal.PlatformRecording.notifyIfStateChanged(RecordingState, RecordingState) line: 496
    jdk.jfr.internal.PlatformRecording.start() line: 150
    jdk.jfr.Recording.start() line: 184
    myapp.JfrProfiler.start() line: 152
    ...
  ]
}

I have a hard time believing that the list of JFR listeners is 6.1 GB large, and the method is only called once, so what does the stack trace in the event mean in this case? Is this just one of the call sites where we allocate ArrayLists, and it happens to be the first one after starting the recording? If so, does the weight include all allocations, even those made prior to starting the recording?

What about later samples for the same objectClass? Can we trust that the stack traces give us a good indication of where the allocations happen, or do they just point to a random place where an allocation occurs?

Tools such as JProfiler use the stack trace to hint at which methods are allocation hot spots, which would make e.g. the call site above stand out significantly even if it's not where most of the allocations actually happen.

Upvotes: 2

Views: 1235

Answers (1)

Kire Haglin

Reputation: 7069

It could be the following bug: if a recording is started while the application is already running, the first sample will contain information from before the recording was started.

See https://bugs.openjdk.org/browse/JDK-8307488

Explanation of the ObjectAllocationSample event

The weight field represents the sample weight, not the size of the allocation.

If you were to run the application for a very long time, this is roughly how much this call site / stack trace would contribute to the total allocation pressure.
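To see what that means in practice, a minimal sketch like the one below sums the weight per top frame over a finished recording using the jdk.jfr.consumer API; the file name app.jfr and the class name are just placeholders:

import java.nio.file.Path;
import java.util.HashMap;
import java.util.Map;

import jdk.jfr.consumer.RecordedEvent;
import jdk.jfr.consumer.RecordedFrame;
import jdk.jfr.consumer.RecordedStackTrace;
import jdk.jfr.consumer.RecordingFile;

public class AllocationPressure {
    public static void main(String[] args) throws Exception {
        // Path to a finished recording; "app.jfr" is just a placeholder.
        Path file = Path.of("app.jfr");
        Map<String, Long> weightByFrame = new HashMap<>();

        for (RecordedEvent event : RecordingFile.readAllEvents(file)) {
            if (!event.getEventType().getName().equals("jdk.ObjectAllocationSample")) {
                continue;
            }
            RecordedStackTrace stackTrace = event.getStackTrace();
            if (stackTrace == null || stackTrace.getFrames().isEmpty()) {
                continue;
            }
            RecordedFrame top = stackTrace.getFrames().get(0);
            String key = top.getMethod().getType().getName() + "." + top.getMethod().getName();
            // Sum the sample weights; the totals approximate relative allocation pressure.
            weightByFrame.merge(key, event.getLong("weight"), Long::sum);
        }

        weightByFrame.entrySet().stream()
                .sorted(Map.Entry.<String, Long>comparingByValue().reversed())
                .limit(10)
                .forEach(e -> System.out.printf("%,15d bytes  %s%n", e.getValue(), e.getKey()));
    }
}

Sorting by the summed weight over the whole recording, rather than looking at individual samples, avoids being misled by a single oversized first sample like the one in the question.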

To keep overhead low, the sampler doesn't instrument every allocation; if it did, emitting the exact size would be reasonable.

Instead it piggybacks on how allocation is done inside the JVM. To avoid taking the "heap lock" all the time, allocation is done in Thread Local Allocation Buffers (TLABs). The size of a TLAB is determined by the JVM, but typically increases when allocation increases. It can be a few kilobytes to several megabytes. When the thread runs out of a TLAB, it goes into a slow path where it requests more memory, possibly triggering a garbage collection. At that point the JVM decides if it should take the sample or not.
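You can watch samples arrive as TLABs are refilled by subscribing to the event with a RecordingStream. A minimal sketch follows; the allocation loop is only there to generate pressure, and the JIT may still optimize some of it away:

import jdk.jfr.consumer.RecordingStream;

public class WatchAllocationSamples {
    // Sink to make it less likely that the allocations below are optimized away.
    static volatile Object sink;

    public static void main(String[] args) throws InterruptedException {
        try (RecordingStream stream = new RecordingStream()) {
            stream.enable("jdk.ObjectAllocationSample").withStackTrace();
            stream.onEvent("jdk.ObjectAllocationSample", event ->
                    System.out.printf("%-40s weight=%,d bytes%n",
                            event.getClass("objectClass").getName(),
                            event.getLong("weight")));
            stream.startAsync();

            // Allocate in a loop so TLABs keep filling up and samples get emitted.
            for (int i = 0; i < 20_000_000; i++) {
                sink = new java.util.ArrayList<Integer>(16);
            }
            Thread.sleep(2000);
        }
    }
}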

The size of the TLAB determines the weight of the sample. If an object of a certain type is allocated seldom, it's unlikely to hit the TLAB threshold. If it's allocated often, it's likely to hit the sample point. Furthermore, large arrays may be allocated outside the TLABs, and then the weight becomes the size of the array. To keep overhead low, there is also a limit on how many events are emitted per second, by default 150, so the weight could be the sum over several sample attempts.
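If you need more samples at the cost of some extra overhead, the throttle setting can be raised when enabling the event. For example (the 300/s value and the file name are arbitrary choices):

import jdk.jfr.Recording;

public class ThrottledRecording {
    public static void main(String[] args) throws Exception {
        Recording recording = new Recording();
        // Raise the per-second cap from the default 150 samples to 300.
        recording.enable("jdk.ObjectAllocationSample")
                 .withStackTrace()
                 .with("throttle", "300/s");
        recording.start();
        // ... run the workload ...
        recording.stop();
        recording.dump(java.nio.file.Path.of("throttled.jfr")); // placeholder file name
        recording.close();
    }
}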

That said, 6.1 GB sounds rather high. With 150 events per second, that would mean an allocation rate of roughly 1000 GB/s (6.1 GB × 150 samples/s ≈ 915 GB/s). The event is new in JDK 16, so it could be a bug, uninitialized memory or something similar. It also seems unlikely that you would hit an object that is only allocated once with such a high allocation rate.

Upvotes: 4
