Reputation: 6792
I have to set up "dockerized" environments (integration, QA and production) on the same server (a client requirement). Each environment will be composed of its own set of services, each running in a container.
On top of them, Jenkins will handle deployment based on CI.
Using a set of containers per environment sounds like the best approach.
But now I need a process manager to run and supervise all of them.
Supervisord seems to be the best choice, but during my tests I'm not able to "properly" restart a container. Here is a snippet of the supervisord.conf:
[program:docker-rabbit]
command=/usr/bin/docker run -p 5672:5672 -p 15672:15672 tutum/rabbitmq
startsecs=20
autorestart=unexpected
exitcodes=0,1
stopsignal=KILL
So I wonder what is the best way to separate each environment and be able to manage and supervise each service (a container).
[EDIT: My solution, inspired by Thomas's response]
Each container is run by a .sh script that looks like:
rabbit-integration.sh
#!/bin/bash
#set -x
SERVICE="rabbitmq"
SH_S="/path/to_shs"
export MY_ENV="integration"
. $SH_S/env_.sh
. $SH_S/utils.sh
SERVICE_ENV=$SERVICE-$MY_ENV
ID_FILE=/tmp/$SERVICE_ENV.name   # file holding the container name
trap stop SIGHUP SIGINT SIGTERM  # call the stop function on these signals
run_rabbitmq
$SH_S/env_.sh looks like:
# set env variables
...
case $MY_ENV in
  $INTEGRATION)
    AMQP_PORT="5672"
    AMQP_IP="172.17.42.1"
    ...
    ;;
  $PREPRODUCTION)
    AMQP_PORT="5673"
    AMQP_IP="172.17.42.1"
    ...
    ;;
  $PRODUCTION)
    AMQP_PORT="5674"
    AMQP_IP="172.17.42.1"
    ...
esac
$SH_S/utils.sh looks like:
#!/bin/bash
function random_name(){
    echo "$SERVICE_ENV-$(cat /proc/sys/kernel/random/uuid)"
}

function stop(){
    echo "stopping docker container..."
    /usr/bin/docker stop "$(cat "$ID_FILE")"
}

function run_rabbitmq(){
    # do not daemonize; keep the container attached so its output goes to stdout
    NAME="$(random_name)"
    echo "$NAME" > "$ID_FILE"
    /usr/bin/docker run -i --name "$NAME" -p $AMQP_IP:$AMQP_PORT:5672 -p $AMQP_ADMIN_PORT:15672 -e RABBITMQ_PASS="$AMQP_PASSWORD" myimage-rabbitmq &
    PID=$!
    wait $PID
}
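Note that running docker in the background and then waiting on it is deliberate: bash only runs a trap handler once the current foreground command has finished, whereas a signal interrupts wait immediately, so stop() is called as soon as supervisord signals the script.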
Finally, myconfig.integration.conf looks like:
[program:rabbit-integration]
command=/path/sh_s/rabbit-integration.sh
startsecs=20
priority=90
autorestart=unexpected
exitcodes=0,1
stopsignal=TERM
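With this in place, stopping the program makes supervisord send SIGTERM to the wrapper, the trap calls stop(), and docker stop shuts the container down cleanly. A quick sanity check could look like the following (a sketch, assuming supervisord has already been pointed at myconfig.integration.conf):
# re-read the config and (re)start the wrapped service
supervisorctl reread
supervisorctl update
supervisorctl restart rabbit-integration

# a freshly named rabbitmq-integration-<uuid> container should now be running
docker ps | grep rabbitmq-integration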
In the case where I want to reuse the same container, the startup function looks like:
function _run_my_container () {
    NAME="my_container"
    /usr/bin/docker start -i "$NAME" &
    PID=$!
    wait $PID
    rc=$?
    if [[ $rc != 0 ]]; then
        # the container does not exist yet (or failed to start): create it
        _create_my_container
    fi
}
where
function _create_my_container (){
    /usr/bin/docker run -p{} -v{} --name "$NAME" myimage &
    PID=$!
    wait $PID
}
Upvotes: 11
Views: 13271
Reputation: 101
You need to make sure you use stopsignal=INT in your supervisor config, then exec docker run normally.
[program:foo]
stopsignal=INT
command=docker run --rm whatever
At least this seems to work for me with docker version 1.9.1.
If you run docker from inside a shell script, it is very important that you have exec in front of the docker run command, so that docker run replaces the shell process and thus receives the SIGINT directly from supervisord.
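For example, a minimal wrapper for the image from the question might look like this (a sketch; the script name is hypothetical):
#!/bin/bash
# run-rabbit.sh -- hypothetical wrapper for supervisord.
# exec replaces this shell with the docker client, so the SIGINT sent
# by supervisord reaches "docker run" directly instead of the shell.
exec docker run --rm -p 5672:5672 -p 15672:15672 tutum/rabbitmq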
Upvotes: 10
Reputation: 1651
I found that executing docker run via supervisor actually works just fine, with a few precautions. The main thing one needs to avoid is allowing supervisord to send a SIGKILL to the docker run process, which will kill off that process but not the container itself.
For the most part, this can be handled by following the instructions in Why Your Dockerized Application Isn’t Receiving Signals. In short, one needs to:
- Use the exec form CMD ["/path/to/myapp"] (same for ENTRYPOINT) instead of the shell form (CMD /path/to/myapp).
- Pass --init to docker run.
- If using an ENTRYPOINT script, ensure its last line calls exec, so as to avoid spawning a new process (see the sketch below).
- If necessary, add a STOPSIGNAL to your Dockerfile.
Additionally, you'll want to make sure that your stopwaitsecs setting in supervisor is greater than the time your process might take to shut down gracefully when it receives a SIGTERM (e.g., graceful_timeout if using gunicorn).
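A minimal sketch of such an entrypoint script (the path and setup step are hypothetical):
#!/bin/sh
# entrypoint.sh -- hypothetical example: perform any setup, then exec the
# application so it becomes PID 1 and receives the stop signal directly.
export APP_CONFIG="${APP_CONFIG:-/etc/myapp/config.yml}"
exec /path/to/myapp "$@"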
Here's a sample config to run a gunicorn container:
[program:gunicorn]
command=/usr/bin/docker run --init --rm -i -p 8000:8000 gunicorn
redirect_stderr=true
stopwaitsecs=31
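(Note the stopwaitsecs=31: presumably one second more than gunicorn's default graceful_timeout of 30 seconds, so supervisor only escalates after gunicorn has had its full shutdown window.)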
Upvotes: 1
Reputation: 38949
You can have Docker just not detach and then things work fine. We manage our Docker containers in this way through supervisor. Docker Compose is great, but if you're already using Supervisor to manage non-docker things as well, it's nice to keep using it to have all your management in one place. We'll wrap our docker run in a bash script like the following and have supervisor track that, and everything works fine:
#!/bin/bash
# Stop and remove any container left over from a previous run.
TO_STOP=$(docker ps | grep "$SERVICE_NAME" | awk '{ print $1 }')
if [ -n "$TO_STOP" ]; then
    docker stop "$SERVICE_NAME"
fi

TO_REMOVE=$(docker ps -a | grep "$SERVICE_NAME" | awk '{ print $1 }')
if [ -n "$TO_REMOVE" ]; then
    docker rm "$SERVICE_NAME"
fi

# exec so the docker client replaces this shell and receives
# supervisor's stop signal directly
exec docker run -a stdout -a stderr --name="$SERVICE_NAME" \
    --rm "$DOCKER_IMAGE:$DOCKER_TAG"
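A matching supervisor stanza might look like this (program and script names are hypothetical):
[program:myservice]
command=/path/to/run-myservice.sh
stopsignal=INT
stopwaitsecs=20
autorestart=true
redirect_stderr=true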
Upvotes: 4
Reputation: 55197
Supervisor requires that the processes it manages do not daemonize, as per its documentation:
Programs meant to be run under supervisor should not daemonize themselves. Instead, they should run in the foreground. They should not detach from the terminal from which they are started.
This is largely incompatible with Docker, where the containers are subprocesses of the Docker process itself (and hence are not subprocesses of Supervisor).
To be able to use Docker with Supervisor, you could write an equivalent of the pidproxy program that works with Docker.
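A rough sketch of such a proxy in bash (the script name is hypothetical): it keeps the docker client in the foreground from supervisor's point of view and translates the stop signal into a docker stop.
#!/bin/bash
# docker-pidproxy.sh <name> <image> [args...] -- hypothetical pidproxy
# equivalent: runs the container attached and turns SIGTERM/SIGINT into
# a clean "docker stop".
NAME="$1"; shift
trap '/usr/bin/docker stop "$NAME"' SIGINT SIGTERM
/usr/bin/docker run --rm --name "$NAME" "$@" &
PID=$!
wait $PID   # interrupted when supervisor signals us; the trap stops the container
wait $PID   # reap the docker client once the container has exited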
But really, the two tools aren't architected to work together, so you should consider changing one or the other.
Upvotes: 15