Reputation: 827
I have a docker image that I can run on many servers with different parameters. Inside the docker image there is a git repository that needs to be pulled somehow. So I need something that:
Some questions:
Is this procedure correct, or are there other ways to do it?
Is there any way to do steps 1 and 2 by passing some arguments to the run command, like a bash script or something else?
When I do docker pull <new image>, do I need to stop the already-running container and restart it after the pull finishes, or is Docker smart enough to understand that it needs to restart the container?
I found watchtower, which can handle container updates, including remotely. I haven't tried it yet, but I will.
EDIT: I have created two scripts. The first, inside the docker image, performs the git pull. The second, outside the docker image, is started by a user or an automated process. This second script does the following:
run the container in detached mode, capturing the container id that the run command returns
exec the first script using the docker exec command
commit the container using the previously saved container id
push the new image to the cloud registry
stop the container
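The five steps above can be sketched as a shell function. The image name and in-container script path are placeholders, not details from the question, and DOCKER can be overridden (e.g. DOCKER=echo) to preview the commands without a Docker daemon:

```shell
# Sketch of the second (outer) script described above; all names are
# placeholders.
DOCKER="${DOCKER:-docker}"

update_image() {
    image="$1"

    # 1. Run detached; `docker run -d` prints the new container id
    cid=$($DOCKER run -d "$image") || return 1

    # 2. Run the in-container script that performs the git pull
    #    (/app/git-pull.sh is a hypothetical path)
    $DOCKER exec "$cid" /app/git-pull.sh

    # 3. Commit the container's current filesystem as a new image
    $DOCKER commit "$cid" "$image"

    # 4. Push the refreshed image to the cloud registry
    $DOCKER push "$image"

    # 5. Stop the container
    $DOCKER stop "$cid"
}
```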
Now I need to try the watchtower program or find other tools.
Upvotes: 0
Views: 3296
Reputation: 158908
To get the net effect of this, you should:
Write a Dockerfile that does the work of installing your application in a pristine Docker container (running docker build will make an image out of it)
Check this Dockerfile into your git repository alongside your source code
Set up some CI system to rebuild the Docker container on every change and tag it with some unique tag (a timestamp, the git commit hash, a relevant git tag) and push it to a repository
On the systems where the containers are running, docker stop && docker rm them, then docker run them with the new tagged image
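The build-tag-push-redeploy cycle above can be sketched as follows. The registry, image, and container names are placeholders; DOCKER can be overridden (e.g. DOCKER=echo) to preview the commands:

```shell
# Sketch of the recommended cycle; all names are hypothetical.
DOCKER="${DOCKER:-docker}"

redeploy() {
    tag="$1"                                   # e.g. a git commit hash or timestamp
    image="registry.example.com/myapp:$tag"    # hypothetical registry/name

    # CI side: rebuild from the Dockerfile checked in next to the source
    $DOCKER build -t "$image" .
    $DOCKER push "$image"

    # Server side: replace the running container with the newly tagged image
    $DOCKER stop myapp && $DOCKER rm myapp
    $DOCKER run -d --name myapp "$image"
}
```

Because each build gets a unique tag, rolling back is just re-running the last two commands with the previous tag.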
This approach has two important advantages over what you describe. The first is that anybody who has the source repository can rebuild exactly the running image. (In your approach if you accidentally lose a running container you can't reproduce what was running.) The second is that, if a build goes wrong, it's easy enough to roll back to running the previous version of the image just by changing the tag back.
In particular, if you're asking "can I run something like a bash script with docker run, so that I can docker commit the result", a Dockerfile is almost exactly what you're looking for.
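A minimal sketch of such a Dockerfile might look like the following; the base image, file names, and start command are assumptions for illustration, not details from the question:

```dockerfile
# Hypothetical minimal Dockerfile; adjust the base image, dependency
# installation, and start command for your application.
FROM python:3.11-slim

WORKDIR /app

# Copying the source at build time replaces the in-container git pull
COPY . .
RUN pip install -r requirements.txt

CMD ["python", "main.py"]
```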
The last step is the least well-defined of these. You can use a simple cluster-management tool like Ansible to ensure containers are running in the right places; or update an image version in something like a Docker Compose YAML file running on Docker Swarm; or, from the looks of it, the watchtower tool you identified could do it. This is something that Kubernetes does extremely well, but it's...an investment.
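To illustrate the Compose variant (the service, image, and tag here are hypothetical), redeploying then amounts to editing the pinned tag and re-running docker compose up -d:

```yaml
# Hypothetical docker-compose.yml fragment; note the unique tag rather
# than :latest, so rolling back is just editing this one line.
services:
  myapp:
    image: registry.example.com/myapp:20240101-abc1234
    restart: unless-stopped
```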
In the workflow you describe, there are a couple of things I would say are distinctly not best practices in production environments. I'd suggest you should basically never use docker commit (docker build is quite straightforward and gives you reproducible image builds; even in the context of an SO question, "here's my Dockerfile" is much easier to describe than "I did a bunch of stuff in a container and then committed it"). docker exec is useful for debugging but shouldn't be the principal way you interact with containers. Finally, using the same image name/tag and committing different images under that same tag makes it difficult to roll back to an older version of the code ("don't use the :latest tag").
Upvotes: 1