Reputation: 1201
I've set up a Node.js application that has only one route, '/', and I'm using Nginx as a reverse proxy, so the request flow is: client -> Nginx -> Node.js.
The '/' route sends a single HTML file back to the client as the response. For load testing, I've used Apache Benchmark (ab).
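For context, the server under test is essentially just this (a minimal sketch, not my exact code; the port and HTML file name are illustrative):

// index.js - minimal sketch of the app under test
const http = require('http');
const fs = require('fs');

// Load the ~134 KB HTML document once at startup
const html = fs.readFileSync('./index.html');

http.createServer((req, res) => {
    // Single '/' route: every request gets the same HTML file
    res.writeHead(200, { 'Content-Type': 'text/html' });
    res.end(html);
}).listen(3000); // Nginx on port 80 proxies to this port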
The ab command used for testing (keep-alive enabled, 250 concurrent connections, 10,000 requests in total):
ab -k -c 250 -n 10000 http://localhost/
Please check the apache benchmark response in the following two cases:
Case 1: Clustering mode is off (no pm2, a plain single-process Node.js server started with node index.js).
rails@rails-laptop:~$ ab -k -c 250 -n 10000 http://localhost/
This is ApacheBench, Version 2.3 <$Revision: 1706008 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/
Benchmarking localhost (be patient)
Completed 1000 requests
Completed 2000 requests
Completed 3000 requests
Completed 4000 requests
Completed 5000 requests
Completed 6000 requests
Completed 7000 requests
Completed 8000 requests
Completed 9000 requests
Completed 10000 requests
Finished 10000 requests
Server Software: nginx/1.10.3
Server Hostname: localhost
Server Port: 80
Document Path: /
Document Length: 134707 bytes
Concurrency Level: 250
Time taken for tests: 9.531 seconds
Complete requests: 10000
Failed requests: 0
Keep-Alive requests: 10000
Total transferred: 1350590000 bytes
HTML transferred: 1347070000 bytes
Requests per second: 1049.26 [#/sec] (mean)
Time per request: 238.264 [ms] (mean)
Time per request: 0.953 [ms] (mean, across all concurrent requests)
Transfer rate: 138390.37 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 0 0.8 0 6
Processing: 38 237 77.6 213 626
Waiting: 31 230 73.8 209 569
Total: 44 237 77.5 213 626
Percentage of the requests served within a certain time (ms)
50% 213
66% 229
75% 247
80% 280
90% 373
95% 395
98% 438
99% 538
100% 626 (longest request)
Case 2: PM2 cluster mode is on (pm2 start index.js -i 4, i.e. 4 cluster workers).
rails@rails-laptop:~$ ab -k -c 250 -n 10000 http://localhost/
This is ApacheBench, Version 2.3 <$Revision: 1706008 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/
Benchmarking localhost (be patient)
Completed 1000 requests
Completed 2000 requests
Completed 3000 requests
Completed 4000 requests
Completed 5000 requests
Completed 6000 requests
Completed 7000 requests
Completed 8000 requests
Completed 9000 requests
Completed 10000 requests
Finished 10000 requests
Server Software: nginx/1.10.3
Server Hostname: localhost
Server Port: 80
Document Path: /
Document Length: 134707 bytes
Concurrency Level: 1
Time taken for tests: 14.109 seconds
Complete requests: 10000
Failed requests: 0
Total transferred: 1350540000 bytes
HTML transferred: 1347070000 bytes
Requests per second: 708.79 [#/sec] (mean)
Time per request: 1.411 [ms] (mean)
Time per request: 1.411 [ms] (mean, across all concurrent requests)
Transfer rate: 93481.05 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 0 0.1 0 9
Processing: 1 1 1.2 1 35
Waiting: 0 1 0.9 1 21
Total: 1 1 1.2 1 35
Percentage of the requests served within a certain time (ms)
50% 1
66% 1
75% 1
80% 1
90% 2
95% 3
98% 5
99% 6
100% 35 (longest request)
Now, if you compare the requests per second in the two scenarios, you will see that the rate without cluster mode (1049.26 [#/sec] mean) is higher than with pm2 cluster mode (708.79 [#/sec] mean). I don't understand why that is. As far as I know, cluster mode is meant to achieve a higher level of concurrency, so why do the results contradict that?
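(My mental model of cluster mode: pm2 start index.js -i 4 is conceptually similar to using Node's built-in cluster module, roughly like the sketch below. This is illustrative only; pm2 does the forking itself, and the port and file names are assumptions.)

// clustered.js - rough sketch of what cluster mode means conceptually
const cluster = require('cluster');
const http = require('http');
const fs = require('fs');

if (cluster.isMaster) {
    // Fork one worker per requested instance; they all share the same port
    for (let i = 0; i < 4; i++) {
        cluster.fork();
    }
} else {
    const html = fs.readFileSync('./index.html');
    http.createServer((req, res) => {
        res.writeHead(200, { 'Content-Type': 'text/html' });
        res.end(html);
    }).listen(3000);
}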
Upvotes: 1
Views: 1769
Reputation: 794
I tried clustering with different parameters:
- number of processes
- a CPU-heavy calculation per request:
  let r = 0;
  for (let i = 1; i <= 50000000; i++) {
      r += i;
  }
- sending a file
- concurrent request count
Here is the git repo
Here is my conclusion:
Watching htop during the runs, I saw that the same number of CPU cores as cluster workers went to 100% usage, and throughput scaled with the cluster count: for example, with a 6-worker cluster the performance was about 6 times higher. I made a repository and wrote up the detailed results in its README file.
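The test server looked roughly like the sketch below (illustrative, not the exact code from the repo; the route names and worker count are assumptions):

// Sketch of the clustered benchmark server used for the tests above
const cluster = require('cluster');
const http = require('http');
const fs = require('fs');

const WORKERS = 6; // varied per test run

if (cluster.isMaster) {
    for (let i = 0; i < WORKERS; i++) cluster.fork();
} else {
    http.createServer((req, res) => {
        if (req.url === '/calc') {
            // the CPU-heavy calculation case
            let r = 0;
            for (let i = 1; i <= 50000000; i++) {
                r += i;
            }
            res.end(String(r));
        } else {
            // the file-sending case
            res.writeHead(200, { 'Content-Type': 'text/html' });
            fs.createReadStream('./index.html').pipe(res);
        }
    }).listen(3000);
}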
Upvotes: 2