josephmisiti

Reputation: 9974

Benchmarking EC2

I am running some quick tests to try to estimate hardware costs for a launch and for the future.

Specs

Ubuntu Natty 11.04 64-bit
Nginx 0.8.54
m1.large

I feel like I must be doing something wrong here. What I am trying to do is estimate how many simultaneous requests I can support before having to add an extra machine. I am using Django app servers, but for right now I am just testing nginx serving the static index.html page.

Results:

$ ab -n 10000 http://ec2-107-20-9-180.compute-1.amazonaws.com/
This is ApacheBench, Version 2.3 <$Revision: 655654 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking ec2-107-20-9-180.compute-1.amazonaws.com (be patient)
Completed 1000 requests
Completed 2000 requests
Completed 3000 requests
Completed 4000 requests
Completed 5000 requests
Completed 6000 requests
Completed 7000 requests
Completed 8000 requests
Completed 9000 requests
Completed 10000 requests
Finished 10000 requests


Server Software:        nginx/0.8.54
Server Hostname:        ec2-107-20-9-180.compute-1.amazonaws.com
Server Port:            80

Document Path:          /
Document Length:        151 bytes

Concurrency Level:      1
Time taken for tests:   217.748 seconds
Complete requests:      10000
Failed requests:        0
Write errors:           0
Total transferred:      3620000 bytes
HTML transferred:       1510000 bytes
Requests per second:    45.92 [#/sec] (mean)
Time per request:       21.775 [ms] (mean)
Time per request:       21.775 [ms] (mean, across all concurrent requests)
Transfer rate:          16.24 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        9   11  10.3     10     971
Processing:    10   11   9.7     11     918
Waiting:       10   11   9.7     11     918
Total:         19   22  14.2     21     982

Percentage of the requests served within a certain time (ms)
  50%     21
  66%     21
  75%     22
  80%     22
  90%     22
  95%     23
  98%     25
  99%     35
 100%    982 (longest request)

So before I even add a Django backend, the basic nginx setup can only support 45 req/second? This is horrible for an m1.large ... no?

What am I doing wrong?

Upvotes: 0

Views: 729

Answers (2)

Leopd

Reputation: 42757

What Mark said about concurrency. Plus I'd shell out a few bucks for a professional load testing service like loadstorm.com and hit the thing really hard that way. Ramp up load until it breaks. Creating simulated traffic that is at all realistic (which is important to estimating server capacity) is not trivial, and these services help by loading resources and following links and such. You won't get very realistic numbers just loading one static page. Get something like the real app running, and hit it with a whole lot of virtual browsers. You can't count on finding the limits of a well configured server with just one machine generating traffic.
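
One rough do-it-yourself version of that (a sketch only; the loadgen1-loadgen3 hosts are hypothetical stand-ins for extra EC2 instances you would spin up, not something from this answer) is to fire ab from several machines at once and compare what each one reports:

# Sketch: generate traffic from more than one machine at the same time.
# loadgen1..loadgen3 are hypothetical hosts with ab installed.
for host in loadgen1 loadgen2 loadgen3; do
    ssh "$host" 'ab -c 50 -n 10000 http://ec2-107-20-9-180.compute-1.amazonaws.com/' &
done
wait   # let all the load generators finish before reading the results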

Upvotes: 0

Mark Lavin

Reputation: 25164

You've only set the concurrency level to 1. I would recommend upping the concurrency (the -c flag for ApacheBench) if you want more realistic results, e.g. ab -c 10 -n 1000 http://ec2-107-20-9-180.compute-1.amazonaws.com/.
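
As a rough illustration of that (the loop and the particular concurrency values are just a sketch, not part of this answer), you can sweep the -c flag and watch where requests/second stops scaling:

# Sketch: increase concurrency step by step and compare throughput.
for c in 1 10 50 100 200; do
    echo "concurrency: $c"
    ab -c "$c" -n 1000 http://ec2-107-20-9-180.compute-1.amazonaws.com/ | grep 'Requests per second'
done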

Upvotes: 2
