Reputation: 1767
I have an application that is responsible for archiving old applications. It processes a large number of applications at a time, so it needs to run for days at a time.
When my company developed this they did a fair bit of performance testing on it and seemed to get decent numbers, but I have been running an archive for a customer recently and it seems to be running really slowly, with the performance degrading even more the longer it runs.
There does not appear to be a memory leak: I have been monitoring it with jconsole, and there is still plenty of memory available, which does not appear to be shrinking.
I have noticed, however, that the survivor space and tenured generation of the heap can fill up very quickly until a garbage collection comes along and clears them out, which seems to be happening rather frequently. I am not sure if that could be a source of the apparent slowdown.
The application has now been running for 7 days and 3 hours, and according to jconsole it has spent 6 hours performing copy garbage collection (772,611 collections) and 12 hours and 25 minutes on mark-sweep compactions (145,940 collections).
This seems like a large amount of time to spend on garbage collection, and I am wondering if anyone has looked into something like this before and knows whether this is normal or not?
Edits
Local processing seems to be slow. For instance, I am looking at one part in the logs that took 5 seconds to extract some XML from a SOAP envelope using XPath, which it then appends to a string buffer along with a root tag; that's all it does. I haven't profiled it yet, as this is running in production; I would either have to pull the data down over the net or set up a large test base in our dev environment, which I may end up having to do.
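For context, a minimal sketch of the step that log entry describes: extract a payload node from a SOAP envelope with XPath and append it to a buffer with a root tag. The element names and the `<archive>` root tag here are hypothetical, not from the real application.

```java
import java.io.ByteArrayInputStream;
import java.io.StringWriter;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.transform.OutputKeys;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.dom.DOMSource;
import javax.xml.transform.stream.StreamResult;
import javax.xml.xpath.XPathConstants;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Node;

public class SoapExtract {
    // Hypothetical reconstruction of the logged step: pull the first child
    // of the SOAP Body with XPath and wrap it in a root tag.
    static String extractPayload(String soap) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new ByteArrayInputStream(soap.getBytes("UTF-8")));
        // Matching on local-name() sidesteps namespace setup for this sketch.
        Node payload = (Node) XPathFactory.newInstance().newXPath()
                .evaluate("//*[local-name()='Body']/*[1]", doc, XPathConstants.NODE);
        StringWriter out = new StringWriter();
        Transformer t = TransformerFactory.newInstance().newTransformer();
        t.setOutputProperty(OutputKeys.OMIT_XML_DECLARATION, "yes");
        t.transform(new DOMSource(payload), new StreamResult(out));
        // Append to a buffer along with a root tag, as described in the logs.
        return new StringBuilder("<archive>").append(out).append("</archive>").toString();
    }

    public static void main(String[] args) throws Exception {
        String soap = "<Envelope><Body><record><id>42</id></record></Body></Envelope>";
        System.out.println(extractPayload(soap));
    }
}
```

On its own this operation should take milliseconds, not 5 seconds, which is why it points at something external (GC pressure or contention) rather than the parsing itself.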
Running Java HotSpot Client VM version 10.0-b23
I really just need high throughput. I haven't configured any specific garbage collection parameters, so it would be running whatever the defaults are. I'm not sure how to find out which collectors are in use?
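One way to see which collectors the running JVM is actually using, without restarting it with extra flags, is the standard `java.lang.management` API; this is a small sketch, and the names printed ("Copy", "MarkSweepCompact", etc.) depend on the VM and collector in use:

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

public class ShowCollectors {
    public static void main(String[] args) {
        // Each bean corresponds to one collector the JVM is actually using;
        // the serial collectors that the Client VM defaults to report as
        // "Copy" (young gen) and "MarkSweepCompact" (tenured gen).
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            System.out.println(gc.getName()
                    + ": collections=" + gc.getCollectionCount()
                    + ", time=" + gc.getCollectionTime() + " ms");
        }
    }
}
```

These are the same counters jconsole reads, so the output should line up with the numbers quoted above.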
Fix
Ended up getting a profiler going on it; it turned out the cause of the slowdown was some badly written code that was constantly trimming lines off a status box outputting logging statements. I should have figured the garbage collection was a symptom of constantly copying the status text in memory, rather than an actual cause.
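For anyone hitting the same pattern, a sketch of the fix under my assumption of what "trimming lines off a status box" means: instead of rebuilding the whole status string on every log line (substring plus concatenation, which churns large `char[]` copies the collector then has to clean up), keep a bounded deque of lines and only touch the line being added or dropped. The class and method names here are illustrative, not the real code:

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class StatusLog {
    // Bounded buffer of status lines; appending and trimming are both O(1)
    // and never copy the rest of the text.
    private final Deque<String> lines = new ArrayDeque<>();
    private final int maxLines;

    public StatusLog(int maxLines) {
        this.maxLines = maxLines;
    }

    public void append(String line) {
        if (lines.size() == maxLines) {
            lines.removeFirst(); // drop the oldest line
        }
        lines.addLast(line);
    }

    // Render the full text only when the UI actually needs it.
    public String text() {
        StringBuilder sb = new StringBuilder();
        for (String l : lines) sb.append(l).append('\n');
        return sb.toString();
    }

    public static void main(String[] args) {
        StatusLog log = new StatusLog(3);
        for (int i = 1; i <= 5; i++) log.append("line " + i);
        System.out.print(log.text()); // keeps only the last 3 lines
    }
}
```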
Cheers Guys.
Upvotes: 11
Views: 8721
Reputation: 10637
There is a balance you will try to maintain between JVM heap footprint and GC time. Another question might be whether you have the heap (and generations) under-allocated in a way that mandates too-frequent GCing. When deploying multi-tenant JVMs on these systems, I've tried to keep the balance under 5% total GC time, along with aggressive heap shrinkage to keep the footprint low (again, multi-tenant). The heap and generations will almost ALWAYS fill to whatever size they are set, precisely to avoid frequent GCing. Remove the -Xms parameter to see a more realistic steady state (if it has any idle time).
+1 to the suggestion on profiling, though; it may be something related not to GC, but to the code.
Upvotes: 2
Reputation: 9337
Without proper profiling, this is a guessing game. As an anecdote, though: a few years ago a web app I was involved with suddenly slowed down (in response time) by a factor of 10 after a JDK upgrade. We ended up chasing it down to an explicit GC invocation added by a genius who was no longer with the company.
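To illustrate what that kind of bug looks like: an explicit `System.gc()` normally forces a full, stop-the-world collection, so one buried in a hot path can dominate response time. A minimal sketch that observes the effect via the standard GC beans (counter behavior varies by collector, so take the exact numbers loosely):

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

public class ExplicitGc {
    static long totalCollections() {
        long n = 0;
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            n += gc.getCollectionCount();
        }
        return n;
    }

    public static void main(String[] args) {
        long before = totalCollections();
        // An explicit request for a full collection; in a hot path this is
        // exactly the kind of call that produces a 10x slowdown.
        System.gc();
        long after = totalCollections();
        System.out.println("collections triggered: " + (after - before));
        // Running with -XX:+DisableExplicitGC turns such calls into no-ops,
        // which is a quick way to test whether explicit GC is the culprit.
    }
}
```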
Upvotes: 2
Reputation: 70564
According to your numbers, total garbage collection time was about 18 hours out of 7 days of execution time. At about 10% of total execution time, that's slightly elevated, but even if you managed to get it down to 0%, you'd only save 10% of the execution time ... so if you're looking for substantial savings, you would do better to look into the other 90%, for instance with a profiler.
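Working the arithmetic from the figures in the question (7 days 3 hours of uptime, 6 hours of copy GC plus 12 hours 25 minutes of mark-sweep compaction):

```java
public class GcOverhead {
    public static void main(String[] args) {
        // Numbers taken directly from the question.
        double uptimeHours = 7 * 24 + 3;           // 171 h of execution time
        double gcHours = 6 + 12 + 25 / 60.0;       // ~18.42 h spent in GC
        double overheadPct = 100.0 * gcHours / uptimeHours;
        System.out.printf("GC overhead: %.1f%%%n", overheadPct); // ~10.8%
    }
}
```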
Upvotes: 4