Mark

Reputation: 1852

Java CMS garbage collector log output

I am using the Java CMS garbage collector and trying to understand log lines like:

22609.787: [GC 22609.788: [ParNew: 1116101K->79200K(1310720K), 0.2369136 secs]     1551730K->516431K(6029312K), 0.2379422 secs] [Times: user=1.68 sys=0.02, real=0.24 secs] 
22610.741: [Full GC 22610.741: [CMS: 437230K->278442K(4718592K), 14.8596841 secs]   573355K->278442K(6029312K), [CMS Perm : 241468K->236967K(241856K)], 14.8694544 secs]   [Times: user=14.80 sys=0.13, real=14.87 secs] 
22635.415: [GC 22635.416: [ParNew: 1048576K->43613K(1310720K), 0.0904065 secs] 1327018K->322055K(6029312K), 0.0914701 secs] [Times: user=0.45 sys=0.00, real=0.09 secs] 

From reading http://blogs.oracle.com/poonam/entry/understanding_cms_gc_logs, I understand that CMS will do a full garbage collection in various situations (e.g. if "promotion in a scavenge attempt fails"), but according to that blog it should log the reason for the full GC.

Instead I'm seeing ad hoc full GCs with no reason given. It is definitely CMS, because there are other CMS log entries.

Are there reasons it would do a full GC but not log the reason why?

EDIT: Sorry, there are multiple Java installations here (I've inherited this setup). It's actually using jdk1.6.0_27.

EDIT: Unfortunately the JVM parameters are wrapped up in config files (this is a commercial app that runs on Tomcat), but I'm pretty sure they are:

min.heapsize=6144m
max.heapsize=6144m
-Xss=512k
-XX:MaxPermSize=512m
-XX:NewSize=1536m 
-XX:MaxNewSize=1536m 
-XX:SurvivorRatio=4 
-XX:+UseConcMarkSweepGC 
-XX:+UseParNewGC 
-XX:+UseTLAB 
-XX:+UseCompressedOops 
-XX:+PrintVMOptions 
-XX:+PrintGCDetails 
-XX:+PrintGCTimeStamps 
-XX:+PrintGCTaskTimeStamps
-XX:+PrintCommandLineFlags 
-XX:+PrintGCApplicationStoppedTime 
-XX:StackShadowPages=20 
-XX:+DisableExplicitGC 
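
(For what it's worth, assuming the standard JDK 6 tools are on the path, `jps` should give the Tomcat pid and `jinfo -flags {pid}` should print the flags actually in effect on the running JVM, which would confirm the list above.)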

Upvotes: 4

Views: 2850

Answers (1)

Peter Lawrey

Reputation: 533492

The ConcurrentMarkSweepGC will start a collection when it determines the heap is about to fill up or its low-water mark has been reached. (I don't believe that's the case here.)

What is more likely is that some code is calling System.gc() directly; RMI, for example, will call it every hour by default. You can try using -XX:+DisableExplicitGC to see if it makes a difference. If it does, I would track down the cause and stop it from happening (some people just leave this option on).
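
If you want to see what an explicit collection looks like under your logging flags, here is a minimal sketch (the class name and allocation sizes are just for illustration); run it with -verbose:gc -XX:+PrintGCDetails -XX:+UseConcMarkSweepGC and without -XX:+DisableExplicitGC, and compare the resulting full GC line with the ones you are seeing:

    // Hypothetical demo class: allocates some garbage, then forces a full GC
    // via System.gc() so you can see how it is reported in the GC log.
    public class ExplicitGcDemo {
        public static void main(String[] args) throws InterruptedException {
            byte[][] junk = new byte[64][];
            for (int i = 0; i < junk.length; i++) {
                junk[i] = new byte[1024 * 1024]; // ~64 MB of garbage
            }
            junk = null;        // make it unreachable
            System.gc();        // explicit full GC (suppressed by -XX:+DisableExplicitGC)
            Thread.sleep(1000); // give the collector time to finish and log
        }
    }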

Another possible cause is running low on free direct memory. Direct memory is only reclaimed on a GC, but it doesn't contribute much to the heap size, so it doesn't look like you need a GC. When you run out of direct memory, the JVM has to call System.gc() to free up the ByteBuffers that are using it. The way I get around this is by cleaning up direct and memory-mapped ByteBuffers explicitly so I don't run out.
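
For example, a minimal sketch of explicit cleanup; note that it relies on the JDK-internal sun.nio.ch.DirectBuffer and sun.misc.Cleaner classes, which work on the Sun/Oracle JDK 6/7 but are not a supported API:

    import java.nio.ByteBuffer;

    public class DirectBufferCleanup {
        // Frees the native memory behind a direct (or memory-mapped) buffer
        // immediately, instead of waiting for a GC to do it.
        public static void clean(ByteBuffer buffer) {
            if (buffer.isDirect()) {
                ((sun.nio.ch.DirectBuffer) buffer).cleaner().clean();
            }
        }

        public static void main(String[] args) {
            ByteBuffer buf = ByteBuffer.allocateDirect(64 * 1024 * 1024);
            // ... use the buffer ...
            clean(buf); // release the 64 MB of direct memory right away
        }
    }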

Also, I would try Java 6 update 30 or even Java 7 update 2, as they may have more predictable behaviour (even if you can't use them, the comparison can be informative).

You could also try the following, which can tell you why a GC was triggered (though not always):

`jstat -gccause {pid} 10s`
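
The last two columns (LGCC and GCC) show the cause of the last and the current collection; if explicit GC is the trigger, you should see something like "System.gc()" reported there.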

Upvotes: 2
