Reputation: 4442
I am writing a simple server application in Java, and I'm now running some benchmarks against it with ApacheBench.
Right after startup, the resident memory used by the server is about 40 MB. I then run a series of 1000 requests (100 concurrent), and after each series I check the memory usage again. The results seem very strange to me.
During the first benchmark runs, 99% of the requests are processed in about 20 ms and the remaining 1% in about 300 ms (!). Meanwhile the memory usage grows. After 5-7 runs this growth stops at 120 MB and requests start running much faster - about 10 ms per request. Moreover, the time and memory figures stay the same when I increase the number of requests.
Why could this happen? If there were a memory leak, my server would require more and more resources. My only guess is that this is due to some adaptive JVM memory allocation mechanism that grows the heap in advance.
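One way to verify this guess would be to log used versus committed heap from inside the server while the benchmark runs. This is only a rough sketch (the HeapMonitor class name is illustrative, not part of my server): if "committed" grows ahead of "used" and then levels off, the JVM is just reserving heap in advance rather than leaking.

    // Sketch: could be started from the server's main(); logs heap usage every 5 seconds.
    public final class HeapMonitor {
        public static void start() {
            Thread t = new Thread(() -> {
                Runtime rt = Runtime.getRuntime();
                while (true) {
                    long committedMb = rt.totalMemory() / (1024 * 1024);
                    long usedMb = (rt.totalMemory() - rt.freeMemory()) / (1024 * 1024);
                    System.out.printf("heap: used=%d MB, committed=%d MB, max=%d MB%n",
                            usedMb, committedMb, rt.maxMemory() / (1024 * 1024));
                    try {
                        Thread.sleep(5_000);
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                        return;
                    }
                }
            });
            t.setDaemon(true);
            t.start();
        }
    }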
Upvotes: 0
Views: 580
Reputation: 61148
The heap starts out at the size specified by -Xms and grows as needed, up to the limit specified by -Xmx.
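For example (MyServer is a placeholder for your main class), you can set the limits explicitly at launch and read them back at run time through the standard Runtime API:

    // A minimal check of the configured limits, assuming a launch such as:
    //   java -Xms128m -Xmx512m MyServer
    public class ShowHeapLimits {
        public static void main(String[] args) {
            Runtime rt = Runtime.getRuntime();
            // totalMemory() starts near the -Xms value and grows as the heap expands;
            // maxMemory() reports (approximately) the -Xmx ceiling.
            System.out.println("initial/committed heap: " + rt.totalMemory() / (1024 * 1024) + " MB");
            System.out.println("maximum heap (-Xmx):    " + rt.maxMemory() / (1024 * 1024) + " MB");
        }
    }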
It is normal for the JVM to take a while to "warm up" - all sorts of optimisations happen at run time.
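As a rough illustration of warm-up (not a rigorous benchmark - a harness like JMH is the proper tool for that), timing the same work in repeated batches typically shows the first batches running slower, until the hot method has been JIT-compiled:

    public class WarmupDemo {
        static long work() {
            long sum = 0;
            for (int i = 0; i < 1_000_000; i++) {
                sum += (i * 31) % 7;
            }
            return sum;
        }

        public static void main(String[] args) {
            for (int batch = 1; batch <= 10; batch++) {
                long start = System.nanoTime();
                long result = work();
                long micros = (System.nanoTime() - start) / 1_000;
                System.out.println("batch " + batch + ": " + micros + " us (result=" + result + ")");
            }
        }
    }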
If the memory climbs up to a point and then stops climbing, you are fine.
If the memory climbs indefinitely and the program eventually throws an OutOfMemoryError, then you have a problem.
From your description it looks as if the former is true, so no memory leak.
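For contrast, a deliberately leaking program behaves like the second case. In the sketch below (LeakDemo is just an illustrative name) every allocation stays reachable from a static collection, so the heap climbs until -Xmx is reached and an OutOfMemoryError is thrown:

    import java.util.ArrayList;
    import java.util.List;

    // Run with a small heap to see the failure quickly, e.g.:  java -Xmx32m LeakDemo
    public class LeakDemo {
        private static final List<byte[]> RETAINED = new ArrayList<>();

        public static void main(String[] args) {
            while (true) {
                RETAINED.add(new byte[1024 * 1024]); // 1 MB that is never released
                System.out.println("retained " + RETAINED.size() + " MB");
            }
        }
    }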
Upvotes: 2