Reputation: 6933
I'm dealing with a strange scenario while trying to determine the root cause of some process restarts. We collected a JFR recording over a period, and I see that the heap does not grow over time; the heap size stays below 2 GiB at all times.
However, I see that the total allocation for the class byte[] is over 180 GiB, while the total physical memory is only 128 GB. The total allocation percentage for that same class is 58.4%, yet its max live size is only 171 MiB.
The "Used Size" is also quite high (125 GiB), and I see this message in the automated analysis section of the JFR profile:
The maximum amount of used memory was 99.7 % of the physical memory available.
The maximum amount of memory used was 125 GiB. This is 99.7 % of the 126 GiB of physical memory available. Having little free memory may lead to swapping, which is very expensive. To avoid this, either decrease the memory usage or increase the amount of available memory.
I'm very confused here. Does total allocation mean the amount of data currently in physical memory? Or is it the cumulative amount of memory that was allocated and has since been garbage collected?
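To illustrate my confusion, here is a toy loop (not our actual code) that, if I understand correctly, would rack up roughly 180 GiB of "total allocation" while the live set and heap stay tiny, because each buffer becomes garbage as soon as the next one is allocated:

    // Toy sketch, not our workload: cumulatively allocates ~180 GiB of byte[],
    // but at most one 1 MiB buffer is live at any moment, so the heap stays small.
    public class AllocationChurn {
        public static void main(String[] args) {
            byte[] sink = null;
            long total = 0;
            for (long i = 0; i < 180L * 1024; i++) {   // 184320 x 1 MiB ~= 180 GiB
                sink = new byte[1024 * 1024];          // previous buffer is now garbage
                total += sink.length;
            }
            System.out.printf("Allocated %d GiB in total, live: %d MiB%n",
                    total >> 30, sink.length >> 20);
        }
    }

Is that the kind of churn JFR's total allocation counts?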
Upvotes: 0
Views: 646
Reputation: 7069
The message was generated by the "Low on Physical Memory" rule. You can verify that it's the rule in question by comparing the message text with the rule's implementation.
From what I can see, it uses the OS_MEMORY_SUMMARY event (jdk.OSMemorySummary in the recording), not events related to object allocation.
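If you want to confirm what the rule saw, you can dump those samples from your recording with the JFR parser API. A minimal sketch, assuming a standard recording file and the usual jdk.OSMemorySummary field names (usedSize, totalSize):

    import java.nio.file.Path;
    import jdk.jfr.consumer.RecordedEvent;
    import jdk.jfr.consumer.RecordingFile;

    // Prints each OS memory summary sample from a .jfr file, i.e. the data
    // the "Low on Physical Memory" rule evaluates.
    public class DumpOsMemory {
        public static void main(String[] args) throws Exception {
            try (RecordingFile rf = new RecordingFile(Path.of(args[0]))) {
                while (rf.hasMoreEvents()) {
                    RecordedEvent e = rf.readEvent();
                    if ("jdk.OSMemorySummary".equals(e.getEventType().getName())) {
                        long used = e.getLong("usedSize");
                        long total = e.getLong("totalSize");
                        System.out.printf("%s: used %d MiB of %d MiB (%.1f%%)%n",
                                e.getStartTime(), used >> 20, total >> 20,
                                100.0 * used / total);
                    }
                }
            }
        }
    }

Run it against your recording (java DumpOsMemory recording.jfr) and you should see the used/total figures the message was derived from.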
Upvotes: 0