Reputation: 111
So, first things first: I have a gut feeling that this is an utterly foolish question, but hear me out anyway. If Node.js is a single-threaded application, can we run multiple instances of it on different ports on the same machine? Suppose I have an 8-thread processor: does this mean I can have 8 Node instances running without a performance hit, provided I install enough RAM? Then I could put a load balancer in front of these 8 instances.
Upvotes: 8
Views: 7799
Reputation: 1925
In addition to the great answer provided by @jmrk, I'd like to comment on the "different ports" portion of your question.
One of the greatest things about Node.js when it comes to multi-process setups is the Cluster Module. You can run one process, fork it multiple times, and make all the forks listen on the same port. So you would not have to manage all the ports with Nginx, for example.
And if you want to deploy to a cluster of machines that might have different numbers of cores, you can determine that count dynamically by bringing the OS module into the game.
const cluster = require('cluster')

if (cluster.isMaster) { // named cluster.isPrimary since Node 16
  // Create one fork per CPU core
  process.title = 'my-node-app-master'
  const { length: numberOfProcs } = require('os').cpus()
  for (let i = 0; i < numberOfProcs; i++) {
    cluster.fork()
  }
  // ...and fork again when one of the forks dies
  cluster.on('exit', (worker, code, signal) => {
    console.error(`worker ${worker.process.pid} died (${signal || code}). restarting it in a sec`)
    setTimeout(() => cluster.fork(), 1000)
  })
} else {
  // Run your server; every fork shares port 8000
  process.title = 'my-node-app-fork'
  const http = require('http')
  http.Server((req, res) => {
    res.writeHead(200)
    res.end(`hello world from pid ${process.pid}\n`)
  }).listen(8000)
}
Setting process.title is helpful when inspecting the processes. When I ran that code on my machine, I got this:
$ ps aux | grep node-app
daniel 8062 1.1 0.1 550008 31380 pts/1 Sl+ 12:27 0:00 my-node-app-master
daniel 8069 0.5 0.1 549168 30476 pts/1 Sl+ 12:27 0:00 my-node-app-fork
daniel 8070 0.3 0.1 549176 30644 pts/1 Sl+ 12:27 0:00 my-node-app-fork
daniel 8077 0.5 0.1 549168 30376 pts/1 Sl+ 12:27 0:00 my-node-app-fork
...
daniel 8157 0.3 0.1 549168 30668 pts/1 Sl+ 12:27 0:00 my-node-app-fork
daniel 8194 0.0 0.0 9028 2468 pts/2 R+ 12:28 0:00 grep --color=auto node-app
Then a few requests:
$ curl http://localhost:8000
hello world from pid 8069
$ curl http://localhost:8000
hello world from pid 8070
$ curl http://localhost:8000
hello world from pid 8077
And when you kill one of the child processes:
$ kill -9 8077
The logs will show
worker 8077 died (SIGKILL). restarting it in a sec
I know this is not directly related to your main question, but it is still relevant, and I think one can make good use of it.
Upvotes: 9
Reputation: 40561
(V8 developer here.)
Yes, in general, running several instances of Node on the same machine can increase the total amount of work done. This would be similar to having several Chrome tabs, which can each do some single-threaded JavaScript work.
That said, it's most probably not as simple as "8 instances on an 8-thread processor gives 8 times the overall throughput", for several reasons:
(1) If you actually mean "8 threads", i.e. 4 cores + hyperthreading, then going from 4 to 8 processes will likely give 20-40% improvement (depending on hardware architecture and specific workload), not 2x.
(2) V8 does use more than one thread for internal purposes (mostly compilation and garbage collection), which is one reason why a single Node instance will likely (depending on workload) use more than one CPU core/thread.
(3) Another reason is that while JavaScript is single-threaded, Node does more than just execute a single thread of JavaScript. The various things happening in the background (that will trigger JS callbacks when they're ready) also need CPU resources.
(4) Finally, the CPU is not necessarily your bottleneck. If your server's performance is capped by e.g. network or disk, then spawning more instances won't help; on the contrary, it might make things significantly worse.
Long story short: it doesn't hurt to try. As a first step, run a typical workload on one instance, and take a look at the current system load (CPU, memory, network, disk). If they all have sufficient idle capacity, try going to two instances, measure whether that increases overall throughput, and check system load again. Then keep adding instances until you notice that it doesn't help any more.
Upvotes: 20