Reputation: 1082
I am tuning the performance of Tomcat 7. The server has 24 cores and 32 GB of memory, and my test interface is a RESTful service that does no processing (it responds immediately). The server.xml configuration is as follows...
<Connector port="8088" protocol="HTTP/1.1"
           connectionTimeout="20000"
           redirectPort="8443"
           enableLookups="false"
           compression="off"
           maxConnections="8192"
           maxThreads="1000"
           tcpNoDelay="true"/>
and the JVM configuration...
-Xms8192M -Xmx16384M.
JMeter runs on another computer with the same specification as the server above, and its heap is configured with -Xms12218m -Xmx24426m.
My JMeter test plan sends 240 concurrent requests to the RESTful interface in a single burst. I have noticed that the average response time for the first 100 requests is no more than 50 ms, but it rises to about 1 second for the next 100 and to about 3 seconds for the rest.
I am curious about this phenomenon. Are there any mistakes in the configuration, or do you have any suggestions?
Thanks in advance.
Upvotes: 9
Views: 24027
Reputation: 86
You can configure:
acceptCount="2048"
and
maxConnections="1024"
maxConnections works together with maxThreads: you should set maxThreads to suit your workload and your CPU core count, for example 8x or 16x the number of cores. acceptCount is the length of the queue for connections waiting to be accepted.
Note that bigger is not always better for maxConnections and maxThreads; the right values depend on what your server hardware can actually handle.
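As a rough sketch only (the numbers below are assumptions following the 8x rule of thumb for a 24-core machine, not tested values), the Connector could look like:
<Connector port="8088" protocol="HTTP/1.1"
           connectionTimeout="20000"
           acceptCount="2048"
           maxConnections="1024"
           maxThreads="192"
           enableLookups="false"
           tcpNoDelay="true"/>
Here maxThreads="192" is just 24 cores x 8. Requests beyond maxConnections wait in the accept queue up to acceptCount; beyond that, new connections may be refused.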
Upvotes: 7
Reputation: 1452
The more requests your server has to service, the longer it takes to service each request. This is normal behaviour.
How are you starting your threads concurrently? Ramp time = 0 or 1?
When you start firing a load of threads, your client takes longer to make requests, and your server takes longer to respond.
At startup, the server is able to respond quickly to all requests, as it has nothing else to do, until it reaches a threshold. Each of those early requests finishes quickly, and the same client thread sends another one. Meanwhile, the server is still responding to the previous wave of requests while more are being queued. Now it has to manage queues while still responding to requests, so another threshold is reached.
Basically, starting a bunch of threads and firing requests concurrently is not a very realistic use case for a server, except in a few cases. When it is relevant, you can expect this behaviour.
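For reference, the part of a JMeter test plan (.jmx) that controls this is the Thread Group. The snippet below is a trimmed sketch matching the question's scenario (240 threads, one request each, no ramp-up); it is an assumption for illustration, not a recommendation:
<ThreadGroup guiclass="ThreadGroupGui" testclass="ThreadGroup" testname="Thread Group" enabled="true">
  <elementProp name="ThreadGroup.main_controller" elementType="LoopController">
    <stringProp name="LoopController.loops">1</stringProp>   <!-- each thread sends a single request -->
  </elementProp>
  <stringProp name="ThreadGroup.num_threads">240</stringProp> <!-- concurrent users -->
  <stringProp name="ThreadGroup.ramp_time">0</stringProp>     <!-- 0 starts all threads at once; raise this to spread the load -->
</ThreadGroup>
Increasing ramp_time spreads thread startup over that many seconds, which gives a much smoother and more realistic load than firing all 240 requests in the same instant.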
Upvotes: 2