Prakhar

Reputation: 15

How does Node handle multiple requests at the same time?

Suppose I have a Node app with an API (/listen). In the callback for this API, I have code that does some file operation and takes around 1 minute. Now client1 hits this /listen API and Node is busy executing the callback (what I know is that Node will hand the file operation off to the OS kernel and will itself be idle). In the meantime client2 also hits the same API and Node starts the file operation for client2. Assume the file operation for client2 finishes before client1's. How will Node know which response to send to which client?

Second question: if Node is performing some big calculation and its thread is busy doing so, will it accept any incoming requests during that time? What will happen if multiple users hit the same API at the same time?

Thanks in advance

Upvotes: 1

Views: 414

Answers (1)

manikawnth

Reputation: 3249

Node.js runs an event loop under the hood.
That means everything it does (polling Linux sockets for incoming connections, polling accepted connections for incoming data, and executing callbacks when an event occurs) happens on a single-threaded loop.
Ref: https://nodejs.org/en/docs/guides/event-loop-timers-and-nexttick/
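As a quick illustration of that single-threaded loop: synchronous code always runs to completion before any queued callback fires (a minimal sketch, not libuv internals):

```javascript
const order = [];

order.push('sync start');

// Both callbacks are queued; neither can run until the current
// synchronous code has finished.
process.nextTick(() => order.push('nextTick callback'));
setTimeout(() => order.push('timer callback'), 0);

order.push('sync end');

// By the time either callback runs, both synchronous pushes have
// already happened: order begins ['sync start', 'sync end'].
```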

In your example, responding to the client is not done by Node itself. It uses the underlying libuv event-loop library to handle this (http://docs.libuv.org/en/v1.x/design.html).

How does libuv perform this?
Since libuv is a cross-platform, non-blocking, event-driven I/O library, it uses the OS-provided polling mechanisms (epoll on Linux, kqueue on macOS, IOCP on Windows). From the libuv docs:

The library provides much more than a simple abstraction over different I/O polling mechanisms: ‘handles’ and ‘streams’ provide a high level abstraction for sockets and other entities; cross-platform file I/O and threading functionality is also provided, amongst other things.

Let's take Linux as an example. Every time there's a new client connection, the kernel creates a new file descriptor for that client when the server (in our case libuv) accepts the connection, and libuv abstracts it into a higher-level connection handle (a unique combination like client IP + port). So Node executes the callback and then invokes the libuv API, which writes to the file descriptor that Linux provided for that client (a separate one for client1 and client2).
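The routing itself falls out of the fact that every pending operation carries the handle of the client it belongs to. A tiny synchronous simulation of that bookkeeping (names here are illustrative, not libuv's), where client2's operation finishes first and each client still gets its own result:

```javascript
// Completed operations land on a queue, tagged with their client handle.
const completionQueue = [];
const responses = {};

function startFileOp(clientId, work) {
  // Pretend the kernel finished the work and queued a completion event.
  completionQueue.push({ clientId, result: work() });
}

function runEventLoop() {
  // Drain the queue: each event carries the client it belongs to,
  // so completion order doesn't matter.
  while (completionQueue.length > 0) {
    const ev = completionQueue.shift();
    responses[ev.clientId] = ev.result;
  }
}

// client2's operation "finishes" before client1's:
startFileOp('client2', () => 'data for client2');
startFileOp('client1', () => 'data for client1');
runEventLoop();
// responses.client1 === 'data for client1'
// responses.client2 === 'data for client2'
```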

An example TCP server written from scratch using Linux epoll:
http://swingseagull.github.io/2016/11/08/epoll-sample/

libuv performs essentially the same thing, providing the same API across OS platforms.

However, file I/O is a different animal. Since there's no clean cross-platform file I/O polling mechanism, libuv resorts to a thread pool: it opens a number of threads in a pool and multiplexes the workload across them.

From libuv docs again:

Unlike network I/O, there are no platform-specific file I/O primitives libuv could rely on, so the current approach is to run blocking file I/O operations in a thread pool.

More detail:
http://docs.libuv.org/en/v1.x/tcp.html (uv_tcp_t)
The above API is used for all TCP operations: binding the server socket, connecting to clients, and shutting down the socket.

That handle is a subclass of the stream handle (uv_stream_t):
http://docs.libuv.org/en/v1.x/stream.html

Basically, all read/write operations via file descriptors (sockets, pipes, TTYs, etc.) are handled by the above API.

To your second question: Node will not immediately get the second request if it's busy processing. However, the request is still written to the file descriptor and the data is buffered in the kernel; when Node (via libuv) polls each socket for more data in the poll phase of its event loop, it fetches the data from the kernel over that descriptor, and the data becomes available to the Node.js runtime only when the corresponding callback is invoked.
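A minimal sketch of that blocking behavior: a zero-delay timer cannot fire while a CPU-bound loop holds the one JS thread, just as incoming requests sit in kernel buffers until the loop is free again:

```javascript
const start = Date.now();
let firedAfter = null;

// Due "immediately", but it can only run once the call stack is empty.
setTimeout(() => { firedAfter = Date.now() - start; }, 0);

// Simulate the "big calculations" from the question: ~100 ms of
// synchronous, CPU-bound work on the single JS thread.
while (Date.now() - start < 100) { /* busy-wait */ }
const elapsed = Date.now() - start;

// The timer callback has still not run at this point; it fires only
// after this synchronous code returns control to the event loop.
```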

From the Node.js event loop guide:

Since any of these operations may schedule more operations and new events processed in the poll phase are queued by the kernel, poll events can be queued while polling events are being processed. As a result, long running callbacks can allow the poll phase to run much longer than a timer's threshold. See the timers and poll sections for more details.

Upvotes: 1
