Reputation: 2124
I'm writing a server application that should serve a lot of requests. I did some load testing and found that the server's throughput hits a limit. My current guess is that the bottleneck is in TCP handling.
My question is: how can I confirm or disprove this guess? Which metrics should I look at, and what values would count as a clue? I would also appreciate any advice about which tools to use.
The server's OS is Linux. My application is written in Java. Feel free to ask for more info in the comments.
PS: I'm not really sure where this question should be posted. Maybe it should be moved to Server Fault?
UPD: It's an HTTP service with a current throughput of around 450 req/sec and an average response size of around 20 KB. Note that it makes 6-8 requests to MongoDB and one to memcached for each client request.
UPD2: I forgot a very important point: the network interface is underutilized; only 80-100 Mbit/s of the 1 Gbit/s link is used. The CPU and memory on both the app server and the DB are not heavily loaded either.
Upvotes: 2
Views: 532
Reputation: 600
If you haven't already, I would recommend implementing some logging in the server application. At a minimum, the app should log per-request timing stats: when each request was received, when processing started, and when the response was sent.
That would help you determine whether the bottleneck is TCP overhead or your application. For a quick and dirty view, you can use Wireshark to measure the time between the last request packet coming in and the first response packet going out for a particular transaction. However, manually measuring many transactions with Wireshark is tedious, and having good logging in place will probably help you out in the long run anyway.
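As a starting point, per-request timing can be collected without any extra libraries. Here is a minimal self-contained sketch in Java; the `RequestTimer` class and the `timed` wrapper are illustrative names I'm assuming, not something from your codebase, and in a real server you would wrap your actual request handler instead of the stand-in lambda:

```java
import java.util.concurrent.atomic.AtomicLong;
import java.util.function.Supplier;

public class RequestTimer {
    private final AtomicLong count = new AtomicLong();
    private final AtomicLong totalNanos = new AtomicLong();

    // Wrap the real request handler; records wall-clock time spent
    // in application code (i.e. after the request was read off the socket).
    public String timed(Supplier<String> handler) {
        long start = System.nanoTime();
        try {
            return handler.get();
        } finally {
            count.incrementAndGet();
            totalNanos.addAndGet(System.nanoTime() - start);
        }
    }

    public long requestCount() {
        return count.get();
    }

    public double avgMillis() {
        long n = count.get();
        return n == 0 ? 0.0 : totalNanos.get() / (n * 1_000_000.0);
    }

    public static void main(String[] args) {
        RequestTimer timer = new RequestTimer();
        for (int i = 0; i < 3; i++) {
            timer.timed(() -> "response body"); // stand-in for real work
        }
        System.out.println("requests=" + timer.requestCount()
                + " avgMs=" + timer.avgMillis());
    }
}
```

If the average time measured here is much smaller than the client-observed latency, the time is being lost outside your application code (network stack, connection setup, queuing), which supports the TCP-handling theory; if it is comparable, the application itself is the bottleneck.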
Good luck!
Upvotes: 3