Reputation: 31
I have a jar running in the background on an Ubuntu server.
At a certain moment, the application starts consuming too much CPU (400%) and 4 child processes stay in the R (running) state:
Screenshot: htop state before/after the problem
N.B.: the problem is NOT triggered by load; it simply appears after a certain amount of time (3-4 days). We have to kill the java process and restart it.
EDIT ADD GC Log:
I ran the app with java -verbose:gc, and here is what I got between restarting the app and the point where the problem described above appears.
EDIT ADD OLD GEN Log:
In the first graph the x-axis is not in seconds, because the log does not include a timestamp for each GC. The screenshot at the bottom shows Visual GC (run in VisualVM) during the period when the problem occurs.
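The missing timestamps are a limitation of bare -verbose:gc. As a sketch (app.jar is a placeholder for your jar, and the exact flags depend on your JDK version), GC logging with timestamps can be enabled like this:

```shell
# JDK 8 and earlier: detailed GC log with wall-clock timestamps
java -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:gc.log -jar app.jar

# JDK 9 and later: unified logging replaces the flags above
java -Xlog:gc*:file=gc.log:time,uptime -jar app.jar
```

With timestamps in the log, the x-axis of such graphs can be plotted in real seconds.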
Here is the thread dump:
http://www.filedropper.com/threaddump2
Upvotes: 1
Views: 185
Reputation: 2051
One possible cause (and this is pure speculation, since we do not have much information to go on) is that the Java process is running out of memory and starts doing back-to-back full garbage collections, which are CPU-intensive. Enable logging to determine whether you get an OutOfMemoryError; if you do, enable GC logging and try to track down the source of the memory leak.
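One way to check for back-to-back full collections without a log file is to sample the JVM's own GC counters. A minimal sketch using the standard java.lang.management API (the class name GcStats is my own; run it inside the affected process, e.g. from a scheduled task):

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

// Prints the cumulative count and time of each garbage collector.
// Sampling this periodically shows whether the old-generation
// collector's numbers are climbing rapidly (back-to-back full GCs).
public class GcStats {
    public static void main(String[] args) {
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            System.out.printf("%s: count=%d time=%dms%n",
                    gc.getName(), gc.getCollectionCount(), gc.getCollectionTime());
        }
    }
}
```

If the old-gen collector's count grows by hundreds between samples while the CPU is pinned, the full-GC theory fits.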
After looking at your graph, I would say that you definitely have a resource leak if the x-axis is anything larger than seconds. It would be interesting if you could post the behaviour of the tenured generation at the end stage and at a higher resolution.
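Besides Visual GC, the tenured-generation occupancy can also be read directly from the memory pool beans. A small sketch (the class name OldGenUsage is mine; pool names vary by collector, e.g. "PS Old Gen" or "Tenured Gen"):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;
import java.lang.management.MemoryUsage;

// Prints the current usage of every heap memory pool; the one whose
// name contains "Old" or "Tenured" is the tenured generation.
public class OldGenUsage {
    public static void main(String[] args) {
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            MemoryUsage u = pool.getUsage();
            if (u != null) {
                System.out.printf("%s: used=%d max=%d%n",
                        pool.getName(), u.getUsed(), u.getMax());
            }
        }
    }
}
```

Logging this every few minutes over the 3-4 day window would show whether old gen creeps toward its max before the CPU spike.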
OK, looking at the new graphs, I am a bit surprised. I cannot reconcile the behaviour of the first graph with the new graphs. You do not seem to have any memory problems at all: old gen is basically vacant, as is young gen. Do you have logs from your application?
The new graphs do not give any more meaningful information; you might consider taking a thread dump when the app goes haywire. Use jstack <pid> >> thread_dump.log
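If running jstack from outside is awkward, the same information can be captured from inside the JVM with the standard ThreadMXBean API. A sketch (the class name ThreadDump is my own choosing):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

// Rough in-process equivalent of a jstack dump: lists every live thread,
// its state, and (part of) its stack trace. The threads stuck in
// RUNNABLE while the CPU is at 400% are the ones to inspect.
public class ThreadDump {
    public static void main(String[] args) {
        ThreadMXBean threads = ManagementFactory.getThreadMXBean();
        for (ThreadInfo info : threads.dumpAllThreads(false, false)) {
            System.out.println(info);
        }
    }
}
```

Either way, take two or three dumps a few seconds apart: threads that show the same stack in every dump are the likely CPU burners.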
Upvotes: 2