matthewatabet

Reputation: 1481

Extracting stacktrace from large crashes

I have a crash handler installed at:

/proc/sys/kernel/core_pattern

which pipes the incoming coredump to a file and then extracts the stacktrace via gdb.
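
For reference, the handler is registered with a leading '|' (the kernel then runs it with the dump on its stdin), and the gdb step is a batch-mode backtrace. Simplified, with hypothetical paths:

    echo '|/usr/local/bin/core_handler %p %e' > /proc/sys/kernel/core_pattern

    # inside core_handler: spool stdin to a file, then
    gdb -batch -ex 'thread apply all bt' /path/to/binary /tmp/core.$PID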

The problem is that these coredumps can sometimes be very large (often over 30GB). In those cases, the application hangs while the coredump is written to disk, and users are left waiting.

Any suggestions on how to handle these very large coredumps in a non-blocking fashion? I don't care about the coredump per se; the stacktrace is what is valuable.

I don't have access to source, so answers such as https://serverfault.com/questions/627747/non-blocking-core-dump-on-linux aren't that practical.

Thanks!

Upvotes: 1

Views: 219

Answers (1)

Employed Russian

Reputation: 213385

I don't have access to source, so answers such as https://serverfault.com/questions/627747/non-blocking-core-dump-on-linux aren't that practical.

If the application is dynamically linked, you can LD_PRELOAD Google coredumper into it, so the answer is somewhat practical.
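
A minimal sketch of such a preload shim, assuming libcoredumper is installed. WriteCoreDumpLimited() is the library's API; everything else here (file names, the 64MB cap, the signal list) is illustrative:

    /* crashshim.c -- sketch of an LD_PRELOAD shim around Google coredumper */
    #include <signal.h>
    #include <unistd.h>
    #include <google/coredumper.h>   /* WriteCoreDumpLimited() */

    static void crash_handler(int sig)
    {
        /* Write at most 64MB of core -- enough for gdb to recover the
           thread stacks -- instead of dumping 30GB of RAM. */
        WriteCoreDumpLimited("/tmp/limited.core", 64 * 1024 * 1024);

        /* Exit directly: re-raising the signal would trigger the kernel's
           own (huge) dump through core_pattern all over again. */
        _exit(128 + sig);
    }

    /* Runs at load time, before main(), when the library is preloaded. */
    __attribute__((constructor))
    static void install_handlers(void)
    {
        signal(SIGSEGV, crash_handler);
        signal(SIGABRT, crash_handler);
        signal(SIGBUS,  crash_handler);
    }

Build and preload it along these lines:

    gcc -shared -fPIC -o libcrashshim.so crashshim.c -lcoredumper
    LD_PRELOAD=./libcrashshim.so ./your_application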

Another possibility is http://github.com/tbricks/tbstack.
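
It attaches to the live process with ptrace and prints all thread stacks without writing any core at all; invocation is along the lines of (check the project README for exact options):

    tbstack $(pidof your_application)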

Certainly dumping 30GB of RAM just to throw most of it away is wasteful.

Upvotes: 2
