Reputation: 405
I am currently learning the Node.js platform in depth. As we know, Node.js is single-threaded, and if it executes a blocking operation (for example fs.readFileSync), the thread has to wait for that operation to finish. I decided to run an experiment: I created a server that responds with a huge amount of data from a file on each request
const { createServer } = require('http');
const fs = require('fs');
const server = createServer();
server.on('request', (req, res) => {
const data = fs.readFileSync('./big.file');
res.end(data);
});
server.listen(8000);
I also launched 5 terminals in order to make parallel requests to the server. I expected that while one request was being handled, the others would have to wait for the first request's blocking operation to finish. However, the other 4 requests were responded to concurrently. Why does this behavior occur?
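(For reference, here is a rough sketch of how the parallel requests could be made from a single script instead of 5 terminals, assuming the server above is listening on port 8000; it fires 5 requests at once and logs when each response starts and finishes:)
const http = require('http');
// Fire 5 requests at (roughly) the same time and log timings for each.
for (let i = 1; i <= 5; i++) {
  const started = Date.now();
  http.get('http://localhost:8000', (res) => {
    res.once('data', () => {
      console.log(`request ${i}: first byte after ${Date.now() - started} ms`);
    });
    res.on('data', () => {}); // keep consuming so the response can finish
    res.on('end', () => {
      console.log(`request ${i}: finished after ${Date.now() - started} ms`);
    });
  });
}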
Upvotes: 1
Views: 1653
Reputation: 2133
Maybe this is not directly related to your question, but I think it is useful:
You can use a stream instead of reading the full file into memory, for example:
const { createServer } = require('http');
const fs = require('fs');
const server = createServer();
server.on('request', (req, res) => {
const readStream = fs.createReadStream('./big.file'); // Here we create the stream.
readStream.pipe(res); // Here we pipe the readable stream to the res writeable stream.
});
server.listen(8000);
The point of doing this is:
This works better because it is non-blocking, and the res object is already a stream, which means the data will be transferred in chunks.
Ok so streams = chunked
Why not read chunks from the file and send them in real time, instead of reading a really big file into memory and dividing it into chunks afterwards?
Also, why is this really important on a real production server?
Because every time a request is received, your code is going to load that big file into RAM. On top of that, this is concurrent, so you are expected to serve multiple files at the same time. So let's do the most advanced math my poor education allows:
1 request for a 1 GB file = 1 GB in RAM
2 requests for a 1 GB file = 2 GB in RAM
etc
That clearly doesn't scale nicely, right?
Streams allow you to decouple that data from the current state of the function (inside that scope), so in simple terms it's going to be (with fs.createReadStream's default chunk size of 64 KB):
1 request for a 1 GB file = 64 KB in RAM
2 requests for a 1 GB file = 128 KB in RAM
etc
Also, the OS is already passing a stream to Node (through fs), so it works with streams end to end.
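To see the chunking for yourself, here is a small sketch (assuming the same './big.file') that only counts and sizes the chunks the read stream emits:
const fs = require('fs');
// Observe the file arriving in fixed-size chunks instead of one big buffer.
const readStream = fs.createReadStream('./big.file');
let chunks = 0;
readStream.on('data', (chunk) => {
  chunks++;
  console.log(`chunk ${chunks}: ${chunk.length} bytes`);
});
readStream.on('end', () => console.log(`done after ${chunks} chunks`));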
Hope it helps :D.
PS: Never use sync operations (blocking) inside async operations (non-blocking).
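As a rough illustration of that last point (a sketch, not a drop-in replacement): the same handler written with the non-blocking fs.promises.readFile. Note it still buffers the whole file, so the stream version above is still the better option for big files.
const { createServer } = require('http');
const fsp = require('fs').promises;
const server = createServer(async (req, res) => {
  try {
    // Non-blocking: the event loop stays free to accept other requests
    // while the file is being read.
    const data = await fsp.readFile('./big.file');
    res.end(data);
  } catch (err) {
    res.statusCode = 500;
    res.end('read failed');
  }
});
server.listen(8000);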
Upvotes: 3
Reputation: 708096
What you're likely seeing is either some asynchronous part of the implementation inside of res.end() actually sending your large amount of data, or all the data being sent very quickly and serially while the clients can't process it fast enough to show it arriving serially. Because the clients are each in their own separate process, they "appear" to show the data arriving concurrently simply because they react too slowly to reveal the actual arrival sequence.
One would have to use a network sniffer to see which of these is actually occurring, or run some different tests, or put some logging inside the implementation of res.end(), or tap into some logging inside the client's TCP stack to determine the actual order of packet arrival among the different requests.
If you have one server and it has one request handler that is doing synchronous I/O, then you will not get multiple requests processed concurrently. If you believe that is happening, then you will have to document exactly how you measured it or concluded it (so we can help you clear up your misunderstanding), because that is not how node.js works when using blocking, synchronous I/O such as fs.readFileSync().
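One simple way to measure it (a sketch, not your original code) is to timestamp the start and end of each handler invocation; with fs.readFileSync() those intervals will never overlap, no matter how many clients are connected:
const { createServer } = require('http');
const fs = require('fs');
let id = 0;
const server = createServer((req, res) => {
  const n = ++id;
  console.log(`handler ${n} start ${Date.now()}`);
  const data = fs.readFileSync('./big.file'); // blocks the one JS thread
  console.log(`handler ${n} end   ${Date.now()}`);
  res.end(data);
});
server.listen(8000);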
node.js runs your Javascript as a single thread, and when you use blocking, synchronous I/O, it blocks that one single thread of Javascript. That's why you should never use synchronous I/O in a server, except perhaps in startup code that only runs once during startup.
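You can see that blocking directly with a sketch like this (assuming './big.file' is large enough that reading it takes a noticeable amount of time): a timer that should tick every 100 ms simply stops firing while the synchronous read runs.
const fs = require('fs');
// A heartbeat that should fire every 100 ms (Ctrl+C to stop).
setInterval(() => console.log('tick', Date.now()), 100);
setTimeout(() => {
  console.log('sync read start', Date.now());
  fs.readFileSync('./big.file'); // no ticks are logged while this runs
  console.log('sync read end', Date.now());
}, 500);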
What is clear is that fs.readFileSync('./big.file') is synchronous, so your second request will not start processing until the first fs.readFileSync() is done. And calling it on the same file over and over again will be very fast (OS disk caching).
But res.end(data) is non-blocking and asynchronous. res is a stream and you're giving the stream some data to process. It will send out as much as it can over the socket, but if it gets flow controlled by TCP, it will pause until there's more room to send on the socket. How much that happens depends upon all sorts of things about your computer, its configuration and the network link to the client.
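A rough way to observe that (a sketch): log when res.end() returns versus when the response's 'finish' event fires; for a big payload, the end() call usually returns well before all the data has been handed off.
const { createServer } = require('http');
const fs = require('fs');
const server = createServer((req, res) => {
  const data = fs.readFileSync('./big.file');
  res.on('finish', () => console.log('response fully sent', Date.now()));
  res.end(data);
  console.log('res.end() returned', Date.now()); // typically logs first for a big file
});
server.listen(8000);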
So, what could be happening is this sequence of events:
First request arrives and does fs.readFileSync() and calls res.end(data). That starts sending data to the client, but returns before it is done because of TCP flow control. This sends node.js back to its event loop.
Second request arrives and does fs.readFileSync() and calls res.end(data). That starts sending data to the client, but returns before it is done because of TCP flow control. This sends node.js back to its event loop.
At this point, the event loop might start processing the third or fourth request, or it might service some more events (from inside the implementation of res.end() or from the write stream of the first request) to keep sending more data. If it does service those events, it could give the appearance (from the client's point of view) of true concurrency among the different requests.
Also, the clients themselves could be obscuring the actual sequencing. Each client is reading from a different buffered socket, and if they are all in different terminals, they are multi-tasked by the OS. So, if there is more data on each client's socket than it can read and display immediately (which is probably the case), then each client will read some, display some, read some more, display some more, etc. If the delay between sending each client's response on your server is smaller than the delay in reading and displaying on the client, then the clients (which are each in their own separate processes) will appear to be receiving data concurrently.
When you are using asynchronous I/O such as fs.readFile(), properly written node.js Javascript code can have many requests "in flight" at the same time. They don't actually run concurrently at exactly the same instant, but one can run, do some work, launch an asynchronous operation, then give way to let another request run. With properly written asynchronous I/O, there can be an appearance from the outside world of concurrent processing, even though it's more akin to sharing the single thread whenever a request handler is waiting for an asynchronous I/O request to finish. But the server code you show does not use this cooperative, asynchronous I/O.
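For comparison, a sketch of the same handler using the asynchronous fs.readFile(): the logging will show reads from different requests overlapping, which is exactly the "in flight" behavior described above.
const { createServer } = require('http');
const fs = require('fs');
let id = 0;
const server = createServer((req, res) => {
  const n = ++id;
  console.log(`request ${n}: read started`);
  // Non-blocking: the callback runs later, so another request's read
  // can start before this one finishes.
  fs.readFile('./big.file', (err, data) => {
    if (err) {
      res.statusCode = 500;
      return res.end('read failed');
    }
    console.log(`request ${n}: read finished`);
    res.end(data);
  });
});
server.listen(8000);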
Upvotes: 3