JHH

Reputation: 9295

Using timeout with docker run from within script

In my Travis CI build, part of my verification is to start a docker container and verify that it doesn't fail within 10 seconds. I have a yarn script docker:run:local that calls docker run -it <mytag> node app.js.

If I call the yarn script with timeout from a bash shell, it works fine:

$ timeout 10 yarn docker:run:local; test $? -eq 124 && echo "Container ran for 10 seconds without error"

This calls docker run, lets it run for 10 seconds, then kills it (if it hasn't already exited). If the exit code is 124, the timeout expired, which means the container was still running. That's exactly what I need to verify that my docker container is reasonably sane.

However, as soon as I run this same command from within a script, whether in a test.sh file called from the shell or in another yarn script invoked with yarn test:docker, the behaviour is completely different. I get:

ERRO[0000] error waiting for container: context canceled

Then the command hangs forever: the 10-second timeout never fires, and I have to Ctrl-Z it and then kill -9 the process. If I run top, I now see a docker process using all my CPU. Using timeout with any other command, such as sleep 20 && echo "Finished sleeping", doesn't behave this way, so I suspect it has something to do with how docker works in interactive mode, but that's only a guess.
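For reference, test.sh is nothing more than the one-liner above wrapped in a file, roughly:

#!/usr/bin/env bash
# same command that works from an interactive shell
timeout 10 yarn docker:run:local
test $? -eq 124 && echo "Container ran for 10 seconds without error"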

What's causing timeout with docker run to fail when called from a script but work fine from an interactive shell, and how do I make this work?

Upvotes: 0

Views: 1180

Answers (1)

Raman Sailopal

Reputation: 12877

Looks like running docker in interactive mode is causing the issue.

Run docker without the interactive flags by removing -it and letting the container run in the default foreground mode, or specify -d instead of -it to run it detached, like so:

docker run -d <mytag> node app.js

or

docker run <mytag> node app.js
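If you still want the 10-second check from the question, one option (just a sketch, reusing <mytag> and app.js from the question) is to start the container detached, wait, and then ask Docker whether it is still running:

cid=$(docker run -d <mytag> node app.js)        # start detached, capture the container ID
sleep 10                                        # give the app 10 seconds to fail
running=$(docker inspect -f '{{.State.Running}}' "$cid")
docker rm -f "$cid" > /dev/null                 # clean up either way
test "$running" = "true" && echo "Container ran for 10 seconds without error"

With nothing attached to the terminal, this also avoids the hang described in the question.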

Upvotes: 1
