vach

Reputation: 11377

Fast heap dumps on out of memory

We all know the good old `-XX:+HeapDumpOnOutOfMemoryError` flag for taking heap dumps when the JVM runs out of memory. The problem is that for large heaps this takes more and more time.

There is a way to take fast heap dumps using the GNU Debugger: you take a core file of the process (very fast), then convert it to heap-dump format using `jmap`; this conversion is the slowest part of the work.
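The manual workflow described above might look roughly like this (a sketch, not a definitive recipe: the PID `1234` and paths are placeholders, and the exact conversion command depends on the JDK version):

```shell
# 1. Snapshot the live process as a core file (fast; the JVM is only
#    paused while the core is written, then continues running).
gcore -o /tmp/core 1234          # produces /tmp/core.1234

# 2. Convert the core file to HPROF format offline (slow part).
#    On JDK 8, jmap can read a core file directly:
jmap -dump:format=b,file=/tmp/heap.hprof $JAVA_HOME/bin/java /tmp/core.1234

#    On JDK 9+, the equivalent lives under jhsdb:
jhsdb jmap --binaryheap --dumpfile /tmp/heap.hprof \
    --exe $JAVA_HOME/bin/java --core /tmp/core.1234
```

Because step 2 runs against the core file rather than the live process, it can be done on another machine or after the container has been replaced.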

However, this only works if you take the dump manually. When your Java apps run in containers, there is usually a fixed timeout before your app is killed non-gracefully; for Kubernetes I believe it is 30 seconds by default.

For many reasons I do not want to extend this timeout to a larger number. Is there a way to trigger only a core file dump when Java runs out of memory, or are we limited to whatever the `-XX:+HeapDumpOnOutOfMemoryError` flag offers?

Upvotes: 0

Views: 701

Answers (1)

Baran Bursalı

Reputation: 360

I can think of 2 possible solutions, but they won't fire only in out-of-memory situations; they cover crashes too:

  1. You can use Java's `-XX:OnError` option to run your own script, e.g. `gcore` or gdb's `generate-core-file` (depending on your OS), to create a core dump that you can later attach a debugger (like gdb) to.

  2. You can enable automatic core dumps in your OS in the way it provides. For Red Hat:

To enable: edit the related line in the file /etc/systemd/system.conf to DefaultLimitCORE=infinity

Reboot and remove the core dump size limit with ulimit -c unlimited.

When your application crashes, the dump should be created in its working directory.
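For option 1, the launch command might look like the sketch below (the jar name is a placeholder; `%p` is expanded by HotSpot to the process ID). Note that `-XX:OnError` fires on fatal JVM errors; HotSpot also has a separate `-XX:OnOutOfMemoryError` flag if you want to react specifically to OutOfMemoryError:

```shell
# Run gcore against the JVM's own PID on a fatal error.
# %p is substituted with the process ID by the JVM.
java -XX:OnError="gcore -o /tmp/core %p" -jar app.jar

# Variant that triggers only on OutOfMemoryError:
java -XX:OnOutOfMemoryError="gcore -o /tmp/core %p" -jar app.jar
```

The resulting core file can then be converted to a heap dump offline, as in the question.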

Upvotes: 1
