Reputation: 309
I'm trying to get a core dump of a proprietary application running on an embedded Linux system, for which I wrote some plugins. What I did was:
ulimit -c unlimited
echo "/tmp/cores/core.%e.%p.%h.%t" > /proc/sys/kernel/core_pattern
kill -3 <PID>
However, no core dump is created. `/tmp/cores` exists and is writable by everyone, and the disk has enough free space. When I try the same thing with `sleep 100 &` as an example process and then kill it, the core dump is created.
I tried the pipe-syntax example from the core manpage, which writes some parameters and the size of the core dump into a file called `core.info`. This file IS created, and the size it reports is greater than 0. So if the core dump is created, why isn't it written to `/tmp/cores`? To be sure, I also searched the file system for `core*` - it's not there. `dmesg` doesn't show any errors (but it does if I pipe the core dump to an invalid program).
Some more info: The system is probably based on Debian, but I'm not quite sure. GDB is not available, nor are many other tools - there is only busybox for the basic stuff. The process I'm trying to debug is automatically restarted soon after being killed.
So, I guess one solution would be to modify the example program so it writes the dump to a file instead of just counting bytes. But why doesn't it work normally if there obviously is some data?
Upvotes: 2
Views: 2043
Reputation: 1
If your proprietary application calls setrlimit(2) with `RLIMIT_CORE` set to 0, or if it is setuid, no core dump happens. See core(5). Perhaps use strace(1) to find out. And you could install `gdb` (perhaps by [cross-] compiling it). See also gcore(1).
Also, check (and maybe set) the limit in the invoking shell. With bash(1), use the `ulimit` builtin. Otherwise, `cat /proc/self/limits` should display the limits. If you don't have `bash` you could code a small wrapper in C calling `setrlimit` then `execve` ...
Upvotes: 0