Reputation: 7789
I encountered something weird when deploying a new service with supervisord. These are the relevant parts:
# supervisord.conf
[program:express]
command=yarn re-express-start
# package.json
{
  "scripts": {
    "re-express-start": "node lib/js/client/Express.bs.js"
  }
}
When I run supervisorctl start, the node server is started as expected. But after I run supervisorctl stop, the server keeps running even though supervisor thinks it has been killed.
If I change the supervisord.conf file to execute node lib/js/client/Express.bs.js directly (without going through yarn), then this works as expected. But I want to go through the package.json-defined script.
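Concretely, the direct variant that does stop cleanly would look like this (same program section, only the command changed):
# supervisord.conf
[program:express]
command=node lib/js/client/Express.bs.js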
I looked at what the process tree looks like, but I don't quite understand why this happens. Below are the processes before and after stopping the supervisord-managed service.
$ ps aux | grep node
user 12785 1.4 3.5 846404 72912 ? Sl 16:30 0:00 node /usr/bin/yarn re-express-start
user 12796 0.0 0.0 4516 708 ? S 16:30 0:00 /bin/sh -c node lib/js/client/Express.bs.js
user 12797 5.2 2.7 697648 56384 ? Sl 16:30 0:00 /usr/bin/node lib/js/client/Express.bs.js
root 12830 0.0 0.0 14216 1004 pts/1 S+ 16:30 0:00 grep --color=auto node
$ pstree -c -l -p -s 12785
systemd(1)───supervisord(7153)───node(12785)─┬─sh(12796)───node(12797)─┬─{node}(12798)
                                             │                         └─{node}(12807)
                                             ├─{node}(12786)
                                             └─{node}(12795)
$ supervisorctl stop express
$ ps aux | grep node
user 12797 0.7 2.7 697648 56384 ? Sl 16:30 0:00 /usr/bin/node lib/js/client/Express.bs.js
root 12975 0.0 0.0 14216 980 pts/1 S+ 16:32 0:00 grep --color=auto node
$ pstree -c -l -p -s 12797
systemd(1)───node(12797)─┬─{node}(12798)
                         └─{node}(12807)
$ kill 12797
$ ps aux | grep node
root 13426 0.0 0.0 14216 976 pts/1 S+ 16:37 0:00 grep --color=auto node
From the above, the "actual" workload process doing the server stuff has PID 12797. It is nested a couple of levels below the supervisord-managed process. Stopping the service via supervisorctl stops the processes with PIDs 12785 and 12796, but not 12797, which gets reparented to the init process.
Any ideas on what is happening here? Is this due to something ignoring some SIGxxx signals? I assume the yarn invocation is somehow swallowing them, but I don't know how, or how to reconfigure it.
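One thing that might be worth trying (untested here) is supervisord's stopasgroup and killasgroup options, which make supervisord signal the program's whole process group instead of just the top-level PID, so the stop signal should also reach the grandchild node process:
# supervisord.conf
[program:express]
command=yarn re-express-start
; on "supervisorctl stop", send the stop signal to the whole process group
stopasgroup=true
; if the program has to be SIGKILLed after the stop timeout, kill the whole group too
killasgroup=true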
Upvotes: 3
Views: 2957
Reputation: 7580
I ran into this issue as well when I was running a node Express app. The problem seemed to be that I was having supervisor call npm start, which refers to the start script in package.json. That script simply calls node app.js. The solution seemed to be to call that command directly from the supervisor config file, like so:
[program:node]
...
command=node app.js
...
stopasgroup=true
stopsignal=QUIT
In addition, I added stopasgroup and changed the stopsignal to QUIT. The stopsignal change seemed to be required in order to properly kill the process.
I can now freely call supervisorctl restart node:node_00 without getting ERROR (spawn error) errors.
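For anyone wanting to verify, the checks from the question can be repeated after a stop; the [n]ode pattern keeps grep from matching its own process:
$ supervisorctl stop node:node_00
$ ps aux | grep [n]ode
$ pstree -p | grep node
If the process group was killed properly, neither command should show a leftover node process.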
Upvotes: 9