bushman

Reputation: 647

Load-testing web-app

When load testing a basic web application, what sanity checks do you do other than expected response time?
Is it fair to ask for peak memory usage?
What other checks do you make?

Upvotes: 6

Views: 542

Answers (4)

aschepis

Reputation: 764

There are a number of online services that can do this type of testing for you as well. Of course, one of the downsides to this approach is that it's harder to correlate the data from the service (which is what can be observed externally) with your own internal data about disk I/O, DB ops, etc. If you end up going this route, I would suggest finding a vendor that will give you programmatic access to the raw test result data.
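Once you have the raw data from the vendor, the correlation step is usually just a timestamp join. A minimal sketch in Python, assuming hypothetical record schemas (a `ts` epoch-seconds field on both sides; the `rt_ms` and `db_ops` keys are illustrative, not any real vendor's format):

```python
def correlate(external, internal, window_s=5):
    """Join externally observed samples (e.g. response times from the
    load-testing service) with internal samples (e.g. disk I/O, DB ops)
    whose timestamps fall within window_s seconds of each other."""
    joined = []
    for e in external:
        # Nearest internal sample by timestamp, if any
        match = min(internal, key=lambda i: abs(i["ts"] - e["ts"]), default=None)
        if match is not None and abs(match["ts"] - e["ts"]) <= window_s:
            joined.append({**e, **match})
    return joined

# Example: one external response-time sample, one internal DB sample
external = [{"ts": 100, "rt_ms": 250}]
internal = [{"ts": 102, "db_ops": 40}]
rows = correlate(external, internal)
```

This is linear in `len(external) * len(internal)`; for large result sets you would sort both lists by timestamp and merge instead.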

Upvotes: 1

jqa

Reputation: 1370

Another good sanity check is to run the tests for at least 24 hours. We do that because one of our apps ran nicely for a few hours and then degraded; that run uncovered issues with scheduled tasks as well as DB connection pooling.
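The key to catching that kind of slow degradation is bucketing latency over time rather than looking at one overall average. A minimal soak-test sketch in Python (the `fetch` callable and the bucket sizes are assumptions; in practice `fetch` would be something like `lambda: urlopen(url).read()` and the buckets would be an hour wide over a 24-hour run):

```python
import time
import statistics

def soak_test(fetch, duration_s, bucket_s):
    """Call fetch repeatedly for duration_s seconds, recording mean
    latency per bucket_s-wide window so gradual degradation stands out
    instead of being averaged away."""
    buckets = []        # mean latency per completed window
    current = []        # latencies in the window in progress
    start = bucket_start = time.monotonic()
    while time.monotonic() - start < duration_s:
        t0 = time.monotonic()
        fetch()
        current.append(time.monotonic() - t0)
        if time.monotonic() - bucket_start >= bucket_s:
            buckets.append(statistics.mean(current))
            current, bucket_start = [], time.monotonic()
    if current:
        buckets.append(statistics.mean(current))
    return buckets

# e.g. a 24h run with hourly buckets:
#   soak_test(lambda: urlopen("http://localhost/").read(), 24*3600, 3600)
```

If the last buckets are noticeably slower than the first, you have the same symptom described above and can start looking at pools, caches, and scheduled jobs.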

Upvotes: 1

CMerrill

Reputation: 1972

We look at a pretty wide variety of metrics when analyzing the results of a load test.

On the server, we start with these four main categories:

  • CPU (% utilization, context switches/sec, process queue length)
  • Memory (% use, page reads/sec, page writes/sec)
  • Bandwidth (incoming, outgoing, send & receive errors, # connections, connection failures, segment retransmits/sec)
  • Disk (Disk I/O Time %, avg service time, queue length, reads and writes/sec)
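On Linux, most of the counters behind these categories can be sampled straight from `/proc` without an agent. A rough sketch (Linux-only; field positions follow the `/proc/stat` and `/proc/meminfo` layouts, and the metric names in the returned dict are my own, not a standard):

```python
import time

def sample_server_metrics():
    """Sample a few kernel counters backing the CPU and memory
    categories above. Linux-only."""
    metrics = {}
    # CPU: aggregate jiffies from the first line of /proc/stat
    # (user, nice, system, idle, iowait, ...)
    with open("/proc/stat") as f:
        fields = f.readline().split()[1:]
    total = sum(int(x) for x in fields)
    idle = int(fields[3])
    metrics["cpu_busy_jiffies"] = total - idle
    metrics["cpu_total_jiffies"] = total
    # Memory: every /proc/meminfo line is "Name:  value [kB]"
    with open("/proc/meminfo") as f:
        mem = {line.split(":")[0]: int(line.split()[1]) for line in f}
    metrics["mem_used_pct"] = 100.0 * (1 - mem["MemAvailable"] / mem["MemTotal"])
    return metrics

def cpu_percent(interval=0.5):
    """% CPU utilization over an interval, from two /proc/stat samples."""
    a = sample_server_metrics()
    time.sleep(interval)
    b = sample_server_metrics()
    busy = b["cpu_busy_jiffies"] - a["cpu_busy_jiffies"]
    total = b["cpu_total_jiffies"] - a["cpu_total_jiffies"]
    return 100.0 * busy / total if total else 0.0
```

On Windows the equivalent data comes from Performance Counters (perfmon), which is also where the IIS and ASP.NET counters below live.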

We also like to look at metrics specific to the webserver and application server in use. For example, in IIS we look at IIS connection counts, cache hit rates and turnover frequency, etc. In .NET, we would be looking at ASP.NET Requests/sec, ASP.NET Last Request Execution Time, ASP.NET Current Requests, ASP.NET Queued Requests, ASP.NET Request Wait Time, ASP.NET Errors/sec and many others.

On the client side, we are primarily looking at total load time for the pages, duration and TTFB (time to first byte) for critical transactions, bandwidth usage, average page size and failure rate. We also find two metrics very useful - we call them Waiting Users and Average Wait Time. Not many tools have these - they tell you at each sample period exactly how many simulated users are in the process of retrieving a resource from the server and how long, on average, they have been waiting for the resource to arrive. We find these very useful for

  • determining when the server has reached its capacity
  • discovering that the server has stopped responding to certain types of requests (typically for certain resources, such as those requiring a database query)
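If your tool doesn't report Waiting Users / Average Wait Time, they are easy to derive yourself: track when each simulated user started its current request, and sample that table periodically. A minimal thread-based sketch (names and structure are my own, not from any particular tool):

```python
import threading
import time

in_flight = {}            # worker id -> monotonic time its request started
lock = threading.Lock()

def worker(wid, request):
    """Register the request start, run it, then deregister."""
    with lock:
        in_flight[wid] = time.monotonic()
    request()             # the simulated fetch
    with lock:
        del in_flight[wid]

def sample():
    """One sample: (waiting users, average wait in seconds so far)."""
    now = time.monotonic()
    with lock:
        waits = [now - t for t in in_flight.values()]
    if not waits:
        return 0, 0.0
    return len(waits), sum(waits) / len(waits)
```

A healthy server shows waiting users fluctuating around the concurrency level with short average waits; a saturated one shows both numbers climbing from sample to sample.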

Upvotes: 3

Vinko Vrsalovic

Reputation: 340151

On the server

  • Requests per second the application can withstand
  • Requests per second that hit the database (if any, related to the number above, but it's useful to have them as separate figures)
  • Transferred bandwidth (separated by media type, if possible)
  • CPU utilization
  • Memory utilization
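The first figure, requests per second the application can withstand, can be estimated with nothing more than a few threads hammering the endpoint. A rough sketch, assuming a `fetch` callable (in a real run it would be an HTTP GET against the app under test):

```python
import threading
import time

def measure_rps(fetch, workers=8, duration_s=5.0):
    """Call fetch from several threads for duration_s seconds and
    report the sustained requests/sec -- a rough throughput ceiling."""
    count = 0
    lock = threading.Lock()
    stop = time.monotonic() + duration_s

    def run():
        nonlocal count
        while time.monotonic() < stop:
            fetch()
            with lock:
                count += 1

    threads = [threading.Thread(target=run) for _ in range(workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return count / duration_s
```

Ramp `workers` up between runs; the point where adding workers stops raising the measured RPS (and starts raising response times instead) is the number you report.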

On the client

  • Response time
  • Weight of the average page
  • Whether CPU usage is high at any point
  • Run something like YSlow to see what you can optimize in the output to make it quick for users

Stress-testing tools usually come with most of these measures (except for memory, CPU, and database usage), as do YSlow and Firebug on the client.

Upvotes: 7
