Reputation: 31
For a REST API application in Java, we are planning to perform a load test, but the initial results are a bit confusing. After developing the script in JMeter:
1. We execute the script for 1, 2, 5, 10 and 25 virtual users.
2. Each test is executed for a 30-minute duration with roughly a 5-second ramp-up.
3. Each request has a random think time between 2 and 3 seconds.
When this test is executed, we see that for a few APIs the 95th-percentile response time for 2, 5 and 10 virtual users is much lower than for 1 virtual user. However, the same test after a restart of Tomcat gives different results.
I am confused as to how the response time can decrease as the number of virtual users increases.
Response time graphs, when tomcat instance is not restarted : https://i.sstatic.net/BaBYl.jpg
Response time graphs, when tomcat instance is restarted : https://i.sstatic.net/J2HpI.jpg
Upvotes: 0
Views: 277
Reputation: 34516
Check that under load your API does not start returning 200 for invalid responses.
Use a Response Assertion for that.
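For illustration, here is a minimal standalone sketch of the same idea outside JMeter, assuming a hypothetical endpoint at http://localhost:8080/api/items: send a deliberately invalid request and fail if the service still answers 200. Inside the test plan itself, a Response Assertion on the Response Code field performs the equivalent check on every sample.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class InvalidResponseCheck {

    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();

        // Deliberately malformed payload; the endpoint and body are placeholders.
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8080/api/items"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString("{ \"broken\": "))
                .build();

        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());

        // Under load, a misbehaving service may keep answering 200 with an
        // error body; this is exactly what a Response Assertion should catch.
        if (response.statusCode() == 200) {
            throw new AssertionError(
                    "Expected an error status for an invalid request, got 200: "
                            + response.body());
        }
        System.out.println("Got expected non-200 status: " + response.statusCode());
    }
}
```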
Upvotes: 0
Reputation: 168002
There is one Java runtime feature to be aware of: just-in-time (JIT) compilation. Java bytecode gets translated into native code after roughly 1500 invocations (the default value), controllable via the -XX:CompileThreshold property.
That could explain the situation you're facing: the Java runtime optimizes methods according to their usage, so a method's execution time may decrease when you call it repeatedly.
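As a minimal sketch (not taken from your test), the toy program below shows the warm-up effect: the same method typically gets faster once HotSpot has compiled it, which is the same mechanism that can make later requests in a run faster than the first ones. Timings are indicative only and depend on your JVM, hardware and compilation settings.

```java
public class JitWarmupDemo {

    // Some non-trivial work so the JIT has something worth compiling.
    static long work(int n) {
        long sum = 0;
        for (int i = 1; i <= n; i++) {
            sum += (long) i * i % 7;
        }
        return sum;
    }

    public static void main(String[] args) {
        for (int round = 1; round <= 5; round++) {
            long start = System.nanoTime();
            for (int i = 0; i < 2_000; i++) {
                work(10_000);
            }
            long elapsedMs = (System.nanoTime() - start) / 1_000_000;
            // Later rounds are usually faster because work() has been JIT-compiled,
            // which mirrors response times dropping after the first requests.
            System.out.println("Round " + round + ": " + elapsedMs + " ms");
        }
    }
}
```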
Also, don't expect the response time for 2 virtual users to be twice that for 1 virtual user. The application can scale up to a certain extent: as you increase the load, the throughput increases while the response time remains roughly the same.
At some point the response time starts growing and the throughput goes down; that is the performance bottleneck. However, the chance of hitting the application's limits with 25 users is minimal on modern hardware.
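To see why, here is a rough Little's Law estimate (throughput = users / (response time + think time)), using your 2-3 s think time averaged to 2.5 s and an assumed 0.1 s response time; even 25 users generate only a handful of requests per second.

```java
public class ThroughputEstimate {

    public static void main(String[] args) {
        double thinkTimeSec = 2.5;    // average of the 2-3 s random think time
        double responseTimeSec = 0.1; // assumed, illustrative response time

        for (int users : new int[] {1, 2, 5, 10, 25}) {
            // Little's Law: concurrency = throughput * (response + think) time
            double requestsPerSec = users / (responseTimeSec + thinkTimeSec);
            System.out.printf("%2d users -> ~%.1f requests/sec%n", users, requestsPerSec);
        }
        // Even 25 users only produce ~9-10 requests/sec here, which is why a
        // modern REST service is unlikely to be saturated by this test.
    }
}
```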
So consider applying other performance testing types, such as stress testing and soak testing, to actually find the application's limits.
More information: Why ‘Normal’ Load Testing Isn’t Enough
Upvotes: 1