Reputation: 730
I am running multiple instances of a Java web app (Play Framework). The longer the web apps run, the less memory is available, until I restart them. Sometimes I get an OutOfMemoryError. I am trying to find the problem, but I get a lot of contradictory information, so I am having trouble finding the source.
Here is the information:
EDIT: Here are the JVM settings:
I use start-stop-daemon, which starts the Play Framework script, which in turn starts the JVM. This is how I use it:
start() {
    echo -n "Starting MyApp"
    sudo start-stop-daemon --background --start \
        --pidfile ${APPLICATION_PATH}/RUNNING_PID \
        --chdir ${APPLICATION_PATH} \
        --exec ${APPLICATION_PATH}/bin/myapp \
        -- \
        -Dinstance.name=${NAME} \
        -Ddatabase.name=${DATABASE} \
        -Dfile.encoding=utf-8 \
        -Dsun.jnu.encoding=utf-8 \
        -Duser.country=DE \
        -Duser.language=de \
        -Dhttp.port=${PORT} \
        -J-Xms64M \
        -J-Xmx128m \
        -J-server \
        -J-XX:+HeapDumpOnOutOfMemoryError \
        >> $LOGFILE 2>&1
}
I am picking one instance of the web apps now:
htop shows 4615M of VIRT and 338M of RES.
When I create a heap dump with jmap -dump:live,format=b,file=mydump.dump <mypid>, the file is only about 50MB.
When I open it in Eclipse MAT the overview shows "20.1MB" of used memory (with the "Keep unreachable objects" option set to ON).
So how can 338MB shown in htop shrink to 20.1MB in Eclipse MAT?
I don't think this is GC related, because no matter how long I wait, htop always shows about this amount of memory; it never goes down.
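One way to see the gap between "used" and "committed" heap from inside the JVM is to query the Runtime. This is a minimal standalone sketch (the class name is made up); the committed size is what the OS has actually handed to the JVM and is what shows up in RES, and with most collectors it is rarely given back even after a GC:

```java
public class HeapStats {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long max = rt.maxMemory();                 // upper bound, from -Xmx
        long committed = rt.totalMemory();         // memory the OS actually handed out
        long used = committed - rt.freeMemory();   // live + not-yet-collected objects
        System.out.printf("max: %dMB, committed: %dMB, used: %dMB%n",
                max >> 20, committed >> 20, used >> 20);
    }
}
```

So a small MAT number and a large, never-shrinking RES are not necessarily contradictory: MAT counts used objects, htop counts committed pages plus everything outside the heap.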
In fact, I would assume that my simple app does not use more than 20MB, maybe 30MB.
I compared two heap dumps taken 4 hours apart with Eclipse MAT and I don't see any significant increase in objects.
PS: I added the -XX:+HeapDumpOnOutOfMemoryError option, but I have to wait 5-7 days until it happens again. I hope to find the problem earlier with your help interpreting my numbers.
Thank you, schube
Upvotes: 0
Views: 113
Reputation: 298539
The heap is the memory containing Java objects. htop surely doesn't know about the heap. Among the things that contribute to the used memory, as reported by VIRT, are:
- the heap itself (its committed size, not just the part currently used by objects)
- class metadata (Metaspace)
- thread stacks
- JIT-compiled code (the code cache)
- memory-mapped files and shared libraries
- native allocations, e.g. direct I/O buffers
When you dump the heap, it will contain live Java objects plus meta information needed to understand the content, like class and field names. When a tool calculates the used heap, it counts only the objects, so that number will naturally be smaller than the heap dump file size. Also, this used-memory figure often excludes memory that is unusable due to padding/alignment. Further, the tools sometimes assume the wrong pointer size, as the relevant information (32-bit architecture vs. 64-bit architecture vs. compressed oops) is not available in the heap dump. These errors can add up.
Note that there might be other reasons for an OutOfMemoryError than having too many objects in the heap. E.g., there might be too much meta information, due to a memory leak combined with dynamic class loading, or too many native I/O buffers…
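To illustrate the native-buffer case: direct ByteBuffers keep their storage outside the Java heap, so they barely show up in a heap dump while still counting toward the process's memory. A hypothetical minimal demo:

```java
import java.nio.ByteBuffer;

public class DirectBufferDemo {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long before = rt.totalMemory() - rt.freeMemory();
        // Allocate 64MB of native memory; the heap only holds 64 tiny wrapper objects.
        ByteBuffer[] buffers = new ByteBuffer[64];
        for (int i = 0; i < buffers.length; i++) {
            buffers[i] = ByteBuffer.allocateDirect(1 << 20); // 1MB each
        }
        long after = rt.totalMemory() - rt.freeMemory();
        // Heap growth stays tiny even though 64MB of native memory was allocated.
        System.out.println("heap growth: " + (after - before) + " bytes");
    }
}
```

A leak of such buffers (or of classes, or threads) would grow RES and eventually trigger an OutOfMemoryError without ever being visible in Eclipse MAT.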
Upvotes: 4