Reputation: 239
We are currently load testing our application, a Java REST services webapp. At first glance the performance looks poor, but we have no point of comparison.
The environment:
- Ubuntu 12.04 server on an Amazon EC2 micro instance
- Tomcat 7, maxThreads=500, Xmx=450m
- Java 6, installed by default
The webapp / service: A simple webapp with one service called "getVersion". It returns the string "1" - there is no processing (DB, file, etc.) - it just returns "1".
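For reference, a minimal sketch of what such an endpoint could look like, assuming a plain JAX-RS resource (the question does not say which framework or class names are actually used):

    import javax.ws.rs.GET;
    import javax.ws.rs.Path;
    import javax.ws.rs.Produces;

    // Hypothetical resource class: returns a constant string, no I/O or processing.
    @Path("/getVersion")
    public class VersionResource {

        @GET
        @Produces("text/plain")
        public String getVersion() {
            return "1";
        }
    }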
The test: We load tested it with multimechanize: 100 concurrent users for 60 seconds. We effectively got 76 requests per second.
The result: between 0.x and 5 seconds to respond, with the 5-second responses appearing approximately once every 10 requests.
We thought Tomcat would easily handle this amount of concurrent requests. Is this normal? Is there anything to tune besides memory and maxThreads?
Upvotes: 0
Views: 1782
Reputation: 1
Your issue could be due to the way micro instances work:
They get up to 2 ECUs, but they do not sustain that processing power. If you use more than your CPU share, your instance is throttled to less processing power for some time.
Upvotes: 0
Reputation: 313
I'd try the same load test on your local laptop first, to rule out the micro instance as the issue (if it's as dead simple an app as you say, that shouldn't be hard).
It's also easy to run up jvisualvm then and profile a little. Occasional very slow outliers smell like GC issues to me.
(Those numbers sound pretty dreadful, if it helps. I'd expect a dummy app like that to handle far more than that.)
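If it does look like GC, one way to check without a profiler is to turn on GC logging and see whether the 5-second outliers line up with long pauses. A sketch, assuming Tomcat's setenv.sh mechanism and standard HotSpot flags available on Java 6 (file and log paths are assumptions):

    # $CATALINA_BASE/bin/setenv.sh (create it if it doesn't exist)
    # Log every collection with timestamps so slow requests can be matched against GC pauses.
    CATALINA_OPTS="$CATALINA_OPTS -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -Xloggc:$CATALINA_BASE/logs/gc.log"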
Upvotes: 0
Reputation: 45725
The result: between 0.x and 5 seconds to respond, with the 5-second responses appearing approximately once every 10 requests.
Based on the above, I would attach a profiler (e.g. jvisualvm) and watch the GC cycles. Try setting the parallel GC, or simply increase the heap size, and see if it has an effect. 450m should be enough, but the allocation done per request, multiplied by many concurrent users, might cause frequent GC cycles. Just a guess, but worth a check.
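A minimal sketch of what that tuning could look like, again assuming the setenv.sh mechanism; the heap value is just the one from the question, not a recommendation:

    # $CATALINA_BASE/bin/setenv.sh
    # Pin the heap to a fixed size and switch to the parallel (throughput) collector.
    CATALINA_OPTS="$CATALINA_OPTS -Xms450m -Xmx450m -XX:+UseParallelGC"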
Upvotes: 2