Bob

Reputation: 1607

CPU-bound process blocks worker pool while using Child Process in NestJS HTTP server

Node version: v10.13.0

I'm running a very simple test of Node.js request concurrency involving a heavy CPU-bound calculation. I understand that Node.js is not the best tool for CPU-bound work, and that a child process should not be spawned for every request, but this code exists only to test how child processes behave. It is written in TypeScript, using NestJS.

src/app.controller.ts

import { Get, Controller } from '@nestjs/common';
import { fork } from 'child_process';

@Controller()
export class AppController {
  @Get()
  async root(): Promise<string> {
    const promise = new Promise<string>(
        (resolve, reject) => {
          // spawn a new child process for the CPU-intensive work
          const child = fork('./src/cpu-intensive.ts');
          child.on('message', (message) => {
            // when the child process finishes, resolve with its result
            resolve(message.result);
          });
          // kick off the work in the child process
          child.send({});
        }
    );
    return await promise;
  }
}

src/cpu-intensive.ts

process.on('message', async (message) => {
  // simulates a 10s-long process
  let now = new Date().getTime();
  let waittime = 10000; // 10 seconds
  while (new Date().getTime() < now + waittime) { /* do nothing */ };
  // send response to master process
  process.send({ result: 'Process ended' });
});

Such a long process, if executed without spawning a child process, leads to the following timeline with 5 concurrent requests (numbered #1 to #5). Since each request blocks the event loop, every request has to wait for the previous ones to complete before it can be answered (a minimal sketch of this non-forked variant is shown after the timeline):

Time 0    10   20   30   40   50
#1   +----+
#2   +----+----+
#3   +----+----+----+
#4   +----+----+----+----+
#5   +----+----+----+----+----+
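
For comparison, here is a minimal sketch of that non-forked variant (hypothetical, reusing the same busy-wait as cpu-intensive.ts directly inside the controller):

  @Get('blocking')
  async blocking(): Promise<string> {
    // Hypothetical handler: the busy-wait runs directly in the main process,
    // so it blocks the event loop for the full 10 seconds.
    const now = new Date().getTime();
    const waittime = 10000; // 10 seconds
    while (new Date().getTime() < now + waittime) { /* do nothing */ }
    return 'Process ended';
  }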

When spawning a new child process per request, I was expecting each process to be handled concurrently by a different logical core of my CPU (mine has 8 logical cores; a quick way to check that count is sketched after the timeline below), leading to this predicted timeline:

Time 0    10   20   30   40   50
#1   +----+
#2   +----+
#3   +----+
#4   +----+
#5   +----+
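
As a side note, the number of logical cores Node.js sees can be checked with the standard os module (a quick sketch, not part of the original test code):

import * as os from 'os';

// Prints the number of logical cores available to Node.js
// (8 on the machine described above).
console.log(`Logical cores: ${os.cpus().length}`);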

However, on every test I observe this strange result:

Time 0    10   20   30   40   50
#1   +----+
#2   +----+----+
#3   +----+----+----+
#4   +----+----+----++
#5   +----+----+----+-+

The first 3 requests act as if the worker pool were starved, even though I'd expect 3 separate child processes to have been spawned. The last 2 requests are very confusing, as they behave as if they were running concurrently with request #3.

I'm currently looking for an explanation for:

- why the first 3 requests behave as if they were run sequentially, and
- why the last 2 requests behave as if they were run concurrently with request #3.

Please note that if I add another 'fast' method as follows:

  @Get('fast')
  async fast(): Promise<string> {
    return 'Fast process ended.';
  }

this method is not impacted by the CPU-intensive processes running concurrently, and it always replies instantly.

Upvotes: 15

Views: 3607

Answers (1)

Aabid

Reputation: 941

I ran this test case on my machine and it works fine; can you check it on your machine?

Node version: v8.11.2, OS: macOS High Sierra 10.13.4, 8 cores

child-process-test.js

const child_process = require('child_process');

for (let i = 0; i < 8; i++) {
    console.log('Start Child Process:', i, (new Date()));
    let worker_process = child_process.fork('cpu-intensive-child.js', [i]);
    worker_process.on('close', function (code) {
        console.log('End Child Process:', i, (new Date()), code);
    });
}

cpu-intensive-child.js

// simulates a 10s-long process
let now = new Date().getTime();
let waittime = 10000; // 10 seconds
while (new Date().getTime() < now + waittime) { /* do nothing */ };
// send response to master process
// process.send({ result: 'Process ended' });

Output

[Console output screenshot: all 8 child processes start at the same time and each ends roughly 10 seconds later.]

You can see in the output that the difference is only about 10 seconds for all of the processes. You can run this test case on your machine and let me know; maybe it can help.

Upvotes: 0
