Reputation: 135
We were using 2.1.7 and occasionally (roughly once every two months) got an OutOfMemoryError in our client application and in the OrientDB server. So we recently upgraded OrientDB from 2.1.7 to 2.2.11. After the upgrade, I'm getting an OutOfMemoryError within a day in the client application that queries data from OrientDB.
In the heap dump, there are 17,014 instances of OSBTreeCollectionManagerRemote and OStorageRemoteAsynchEventListener, which together account for about 95% of total memory.
[Screenshot: memory problem suspects from the heap dump analysis]
As part of the upgrade, Java was also upgraded to 8.
Client (Tomcat) JVM params:
-Xmx2048m -XX:MaxPermSize=512m -XX:MaxDirectMemorySize=2048m -XX:+UseParallelOldGC -XX:+HeapDumpOnOutOfMemoryError -XX:+CMSClassUnloadingEnabled
I tried with and without a graph connection pool; the results are the same. For reference, here is a minimal sketch of how we acquire and release the pooled graph (URL and credentials are placeholders; this assumes the Blueprints OrientGraphFactory API shipped with 2.2):
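import com.tinkerpop.blueprints.impls.orient.OrientGraph;
import com.tinkerpop.blueprints.impls.orient.OrientGraphFactory;

public class PooledQuery {
    // One factory for the whole application; setupPool(min, max) enables pooling.
    // "remote:localhost/mydb", "admin"/"admin" are placeholders for our real settings.
    private static final OrientGraphFactory FACTORY =
            new OrientGraphFactory("remote:localhost/mydb", "admin", "admin").setupPool(1, 10);

    public static void runQuery() {
        OrientGraph graph = FACTORY.getTx(); // borrow a pooled transactional graph
        try {
            // ... run queries against the graph here ...
        } finally {
            graph.shutdown(); // returns the instance to the pool instead of closing it
        }
    }
}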
Can anyone give more info on how to address this issue? I can share the heap dump if anyone is interested.
Upvotes: 0
Views: 85
Reputation: 126
You are using the throughput collector (ParallelGC) and have also enabled ParallelOld, so -XX:+CMSClassUnloadingEnabled is not in use: that flag belongs to the ConcurrentMarkSweep collector, where it unloads classes from Permanent/Metaspace during a normal CMS cycle instead of waiting for an undesired Full GC.
Please describe your client machine: which OS, how much free RAM, how many CPUs/cores, and how many Java processes are running on it?
Remember that Java should never be swapped out. On Linux, for example, you can watch swap activity while the JVM runs (a basic check, assuming a Linux client; the si/so columns should stay at 0):
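vmstat 5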
Instead of collecting huge, static heap dumps, start by tracing the dynamic behaviour of your collector using the PrintGC options, and if you suspect a leak, check for growing instance counts by printing ASCII histograms with jmap -histo:live (which performs a Full GC before collecting the histogram). For example (the pid is a placeholder; run it a few times and compare the counts of the top classes):
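jmap -histo:live <pid> | head -n 25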
ParallelGC handles the old generation with Full GCs, so that is the expected behaviour.
Which OutOfMemoryError do you get? Is it in heap, permanent, or direct memory? I suppose heap, judging by your flags. The error message itself tells you which segment is exhausted; the standard JVM messages are:
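java.lang.OutOfMemoryError: Java heap space
java.lang.OutOfMemoryError: Metaspace (or "PermGen space" on Java 7)
java.lang.OutOfMemoryError: Direct buffer memory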
Try to understand which segment is full and increase it, up to the limits of your client machine. To generate a gc.log tagged with process number and timestamp, use: -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:$CATALINA_HOME/logs/gclog_%p_%t.log
You can also add -XX:+PrintTenuringDistribution to see whether promotion happens too early, while the age of objects is still low. It could be that the new generation is too small, in which case give it a bigger size with -Xmn.
You can check the current value of every -XX flag with -XX:+PrintFlagsFinal. See also http://www.oracle.com/technetwork/articles/java/vmoptions-jsp-140102.html
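For example, to check the Metaspace sizes (the grep assumes a Unix-like shell):
java -XX:+PrintFlagsFinal -version | grep -i metaspace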
Start Java 8u102 with the current generation sizes, and increase them where needed:
-Xms2048m -Xmn768m -Xmx2048m -XX:MetaspaceSize=512m -XX:MaxMetaspaceSize=512m -XX:MaxDirectMemorySize=2048m -XX:+UseParallelOldGC -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintTenuringDistribution -Xloggc:$CATALINA_HOME/logs/gclog_%p_%t.log
For Java 7, instead of Metaspace use Permanent: -XX:PermSize=512m -XX:MaxPermSize=512m
Provide all the info about the client machine and the gclog if you need feedback.
If you do some tuning yourself from the gclog and increase the memory segment that seems too small a few times, but continue to get the same OutOfMemoryError, then there could be a memory leak in your application, because ParallelGC works with Full GCs (which should clean up all released objects/classes/strings).
Upvotes: 1