Yusuf K

Reputation: 13

Node.js server handling multiple requests at same time

I have been trying for hours to get my Node.js server to handle two requests in parallel, but with no success.

Here is my full code:

var http = require('http');

http.createServer(function (req, res) {
    console.log("log 1");
    handleRequest().then(() => {
        console.log("request handled");
        res.write('Hello World!');
        res.end();
    });
    console.log("log 2");
}).listen(8080);

const handleRequest = () => {
    // Simulate 10 seconds of async work before resolving.
    const p = new Promise((resolve, reject) => {
        setTimeout(() => resolve('hello'), 10000);
    });
    return p;
}

When I run this, I immediately open two tabs in my browser (Chrome) and watch the logs in the IDE. Here are the logs I'm getting:

log 1 Fri Mar 12 2021 23:27:39 GMT+0300 (GMT+03:00)
log 2 Fri Mar 12 2021 23:27:39 GMT+0300 (GMT+03:00)
request handled
log 1 Fri Mar 12 2021 23:27:49 GMT+0300 (GMT+03:00)
log 2 Fri Mar 12 2021 23:27:49 GMT+0300 (GMT+03:00)
request handled

For individual requests, my "async" code seems to work as I expected: the logs are printed first, and after 10 seconds the request handling completes. But as you can see from the timestamps, even though I open the two tabs one right after the other (sending both requests at the same time), they are not handled in parallel. I was actually hoping to get logs like this:

log 1 Fri Mar 12 2021 23:27:39 GMT+0300 (GMT+03:00)
log 2 Fri Mar 12 2021 23:27:39 GMT+0300 (GMT+03:00)
log 1 Fri Mar 12 2021 23:27:39 GMT+0300 (GMT+03:00)
log 2 Fri Mar 12 2021 23:27:39 GMT+0300 (GMT+03:00)
request handled
request handled

It seems my second request is not handled until the first one is completely done. What am I doing wrong here? Can you please give me some ideas?

Upvotes: 1

Views: 1611

Answers (1)

jfriend00

Reputation: 707158

Some browsers will not send two identical GET requests to the same host at the same time; for caching/efficiency reasons, the browser waits for the response to the prior request to see if it's cacheable. So, if you're trying to bypass this browser sequencing, you can add a query string with some sort of random or always-different value to each request (so that the GET requests are not for the exact same URL).

http://sample.com/somePath?r=1
http://sample.com/somePath?r=2
http://sample.com/somePath?r=3
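
Applied to the server in the question, that just means giving each tab a slightly different URL (assuming it is running locally on port 8080, per the listen(8080) call):

http://localhost:8080/?r=1
http://localhost:8080/?r=2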

For the classic example of why this is, imagine a web page that uses a small image for an expando glyph and contains 100 uses of that image. You do not want the browser making 100 requests to your server for that image.

Instead, you want it to make one request to your server for that image, wait for the response, and, if the headers permit caching of that image, fetch it from the cache for the other 99 occurrences. To make that work, the browser has to queue up identical requests for the same URL; when the first one comes back, it examines the response for cacheability and then either uses the cached result or sends the next request.

So, for testing purposes, the way to bypass that browser optimization is to make sure each URL is unique so the browser won't "hold" the request in hopes of using a previously cached result.


FYI, you can implement this in a testing environment with either an incrementing counter:

const url = "http://sample.com/somepath";
// counter defined in a scope where it persists 
// from one request to the next
let counter = 0;

fetch(`${url}?r=${++counter}`).then(...).catch(...);

Or with Math.random():

const url = "http://sample.com/somepath";
fetch(`${url}?r=${Math.random()}`).then(...).catch(...);

If the browser code is actually using the fetch() interface, then you can also use the {cache: "no-store"} option as in:

fetch(url, {cache: "no-store"}).then(...).catch(...);

to tell the browser not to consider caching; this will also keep the browser from waiting for prior requests to the same URL to complete.
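
As a side note, a quick way to confirm that the server itself handles requests in parallel, without any browser in the picture, is to fire two requests from a small Node script. Here's a minimal sketch, assuming the server from the question is running on localhost:8080:

const http = require('http');

const start = Date.now();

// Send one GET request and resolve when the full response has been received.
const request = (path) => new Promise((resolve, reject) => {
    http.get(`http://localhost:8080${path}`, (res) => {
        res.resume();               // drain the response body
        res.on('end', resolve);
    }).on('error', reject);
});

// Two distinct paths, sent at the same time.
Promise.all([request('/?r=1'), request('/?r=2')]).then(() => {
    // If the server handles both requests in parallel, this prints
    // roughly 10 seconds (the setTimeout delay), not 20.
    console.log(`both done after ${(Date.now() - start) / 1000}s`);
});

If that prints about 10 seconds, the server is fine and the sequencing you saw really is the browser queuing the second identical request.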

Upvotes: 1
