Aliance

Reputation: 867

Docker application deployment

My web application consists of 3 Docker containers: app (the main container with the code), redis and node. I have a deployment shell script which does the following (a rough sketch of it is shown after the list):

  1. clones master from git (git clone <...> $REVISION)
  2. removes all files from document root directory (rm -rf $PROJECT_DIR)
  3. moves everything cloned into the document root (mv $REVISION $PROJECT_DIR)
  4. stops all running containers (docker-compose stop)
  5. removes all stopped containers (docker-compose rm -f)
  6. builds the containers (docker-compose build)
  7. runs all built containers (docker-compose up -d)
  8. runs all init and start scripts inside the containers via docker exec (for example: config compilation, nginx reload)

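A minimal sketch of such a script, assuming the $REVISION and $PROJECT_DIR variables above and a hypothetical init.sh entry point inside the app container:

#!/bin/sh
# deploy.sh -- sketch of the steps above
set -e                                 # stop on the first error

git clone <...> "$REVISION"            # 1. clone master into $REVISION
rm -rf "$PROJECT_DIR"                  # 2. clear the document root
mv "$REVISION" "$PROJECT_DIR"          # 3. move the fresh clone into place

cd "$PROJECT_DIR"
docker-compose stop                    # 4. stop all running containers
docker-compose rm -f                   # 5. remove all stopped containers
docker-compose build                   # 6. build the images
docker-compose up -d                   # 7. start the containers
docker exec app ./init.sh              # 8. init/start scripts (config compilation, nginx reload, ...)
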
And this works fine for me, but I have several doubts about this scheme:

  1. In step 6, if I haven't changed the files for the node container, it will reuse the already built image, which is fast. But if I change something, the image is rebuilt, which is slow and leaves unused images behind.
  2. In the worst case (when I have made changes to the node code) a deployment takes about 2-3 minutes; in the best case, about 30 seconds. But even then it's downtime for some users.

As I see it, I need the ability to build the new container in parallel, while the old container keeps working, and only after a successful build switch the "latest" tag to the new container, which is the one used by the app. How can I do this?

I will be very thankful for your comments.

Upvotes: 1

Views: 333

Answers (1)

L0j1k

Reputation: 12625

What I do is tag all my images by version in addition to tagging them "latest". So I have one image with multiple tags - just tag it with more than one. When you tag by version, it lets you move the "latest" tag around without problems:

docker build -t=myapp .
docker tag myapp:latest myapp:0.8.1

Now when you run docker images you'll see the same image listed twice, just with different tags (both "latest" and "0.8.1"). So when you go to build a new version, like you mention:

# the original container is still running while this builds ...
docker build -t=myapp .
# now tag "latest" to the newest version
docker tag myapp:latest myapp:0.8.2
# and now you can just stop the old container and start the new one ...
docker rename myapp myapp-old
docker stop myapp-old
docker run -d --name=myapp -p 80:80 myapp:latest

This is something you could do, but it sounds like what you really need is a way to swap containers without any downtime: zero-downtime container changes.

There is a process I have used for a couple of years now: putting an Nginx reverse proxy in front of your Docker containers. Jason Wilder details the process in this blog post.

I'll give you an overview of what this will do for you. The jwilder/nginx-proxy docker image will serve as a reverse proxy for your containers, and by default it round-robin load-balances inbound connections to containers based on the hostname. After you build and run a container with the same VIRTUAL_HOST environment variable, nginx-proxy automatically round-robin load-balances the two containers. This way, you can start the new container, and it will begin servicing requests. Then you can just bring down your other, old container. Zero-downtime updates.

Just some details: the nginx-proxy image uses Jason Wilder's docker-gen utility to automatically grab the Docker container information and then route requests to each container. What this means is that you start your normal containers with a new environment variable (VIRTUAL_HOST) and nginx-proxy will automatically begin routing inbound requests to them. This is best used to "share" a port (e.g. tcp/80) among many containers. The reverse proxy can also handle HTTPS as well as HTTP authentication, so you don't have to handle those inside your web containers. The backend is unencrypted (HTTP), but since it's on the same host, no problem.
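
As a rough sketch of the flow (the hostname and container names here are placeholders, not something from your setup):

# start the reverse proxy once; it watches the Docker socket for new containers
docker run -d --name=nginx-proxy -p 80:80 \
    -v /var/run/docker.sock:/tmp/docker.sock:ro jwilder/nginx-proxy

# start the new version of the app with a VIRTUAL_HOST; nginx-proxy begins
# round-robin balancing between the old and new containers for that hostname
docker run -d --name=myapp-new -e VIRTUAL_HOST=app.example.com myapp:0.8.2

# once the new container is serving requests, remove the old one
docker stop myapp-old && docker rm myapp-old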

Upvotes: 2
