Adam

Reputation: 15

Better Docker Compose production setting?

I have a really simple web application consisting of these containers:

Every container has its own Dockerfile, and I can run them all together with Docker Compose. It's really nice and I like the simplicity.

There is a deploy script on my server. It clones the Git monorepo and runs docker-compose:

DIR=$(dirname "$(readlink -f "$0")")
rm -rf "$DIR/app"
git clone [email protected]:adam/myproject.git "$DIR/app"
cd "$DIR/app" && \
   docker-compose down --remove-orphans && \
   docker-compose up --build -d

But this solution is really slow and causes ~3 minutes of downtime. For this project I can accept a few seconds of downtime; it's not fatal. I don't need true zero downtime. But 3 minutes is not acceptable.

The most time-consuming part is "npm build" inside the containers, which must be run after every change.

What can I do better? Are Swarm or Kubernetes really the only solutions? Can I build the containers while the old app is still running, and after the build just stop the old containers and start the new ones?

Thanks!

Upvotes: 0

Views: 421

Answers (4)

David Maze

Reputation: 158714

If you can structure things so that your images are self-contained, then you can get a fairly short downtime.

I would recommend using a unique tag for your images. A date stamp works well; you mention you have a monorepo, so you can use the commit ID in that repo for your image tag too. In your docker-compose.yml file, use an environment variable for your image names:

version: '3'
services:
  frontend:
    image: myname/frontend:${TAG:-latest}
    ports: [...]
  et: cetera

Do not use volumes: to overwrite the code in your images. Do have your CI system test your images as built, running the exact image you're getting ready to deploy; no bind mounts or extra artificial test code. The question mentions "npm build inside containers"; run all of these build steps during the docker build phase and specify them in your Dockerfile, so you don't need to run these at deploy time.
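For a Node frontend, a cache-friendly Dockerfile might look roughly like this (a sketch, assuming a typical npm project layout with a `dist/` build output, not the asker's actual files):

```Dockerfile
# Copy only the dependency manifests first, so the npm install
# layer stays cached until package.json actually changes
FROM node:12 AS build
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci
# Copy the rest of the source and build; only these layers
# re-run on an ordinary code change
COPY . ./
RUN npm run build

# Serve the built static files from a small runtime image
FROM nginx:alpine
COPY --from=build /app/dist /usr/share/nginx/html
```

With the build baked into the image like this, the expensive npm steps happen at `docker-compose build` time, not while the old containers are stopped.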

When you have a new commit in your repo, build new images. This can happen on a separate system; it can happen in parallel with your running system. If you use a unique tag per image then it's more obvious that you're building a new image that's different from the running image. (In principle you can use a single ...:latest tag but I wouldn't recommend it.)

# Choose a tag; let's pick something based on a timestamp
export TAG=20200117.01

# Build the images
docker-compose build

# Push the images to a repository
# (Recommended; required if you're building somewhere
# other than the deployment system)
docker-compose push

Now you're at a point where you've built new images, but you're still running containers based on old images. You can tell Docker Compose to update things now. If you docker-compose pull images up front (or if you built them on the same system) then this just consists of stopping the existing containers and starting new ones. This is the only downtime point.

# Name the tag you want to deploy (same as above)
export TAG=20200117.01

# Pre-pull the images
docker-compose pull

# ==> During every step up to this point the existing system
# ==> is running undisturbed

# Ask Compose to replace the existing containers
# ==> This command is the only one that has any downtime
docker-compose up -d

(Why is the unique tag important? Say a mistake happens, and build 20200117.02 has a critical bug. It's very easy to set the tag back to the earlier 20200117.01 and re-run the deploy, rolling back the deployed system without doing a git revert and rebuilding the code. If you're looking at cluster managers like Kubernetes, the changed tag value is a signal to a Kubernetes Deployment object that something has updated, so it triggers an automatic redeployment.)
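The rollback is then just the deploy commands with the older tag (sketched here with the same hypothetical timestamp tags as above):

```shell
# Point back at the last known-good build...
export TAG=20200117.01
# ...and redeploy; Compose replaces the containers using the old images,
# which are already present locally or in the registry
docker-compose up -d
```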

Upvotes: 2

leeman24

Reputation: 2889

While I do think that switching to Kubernetes (or maybe Docker Swarm, which I don't have experience with) would be the best option, yes, you can build your Docker images first and then restart.

You just need to run the docker-compose build command. See below:

DIR=$(dirname "$(readlink -f "$0")")
rm -rf "$DIR/app"
git clone [email protected]:adam/myproject.git "$DIR/app"
cd "$DIR/app" && \
   docker-compose build && \
   docker-compose down --remove-orphans && \
   docker-compose up -d

Upvotes: 0

Adam

Reputation: 15

The only real problem was running docker-compose down before docker-compose build. I deleted the down command and the downtime is a few seconds now. I thought the build automatically shut down the running containers before building; I don't know why. Thanks Noé for the idea! I'm an idiot.

Upvotes: 0

Noé

Reputation: 508

This long downtime can come from multiple things:

  • Your application ignores the stop signal, so docker-compose waits for the containers to terminate before killing them. Check that your containers exit cleanly without waiting for the kill signal.
  • Your Dockerfile is badly ordered. Docker has a built-in cache for every step, but if an earlier step changes, it has to redo every later step. Look carefully at where you copy files; it's often this that breaks the cache.
  • Run docker-compose build before putting the containers down. Be careful about mounted volumes: if Docker can't get the build context, it will fail.
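On the first point: a common cause is the shell-form CMD, where your process runs as a child of /bin/sh and never sees the SIGTERM, so the stop waits out Docker's default 10-second timeout for every container. A sketch of the fix (assuming a Node app; `server.js` is a placeholder for your real entrypoint):

```Dockerfile
# Shell form: SIGTERM goes to /bin/sh, not to your app,
# so "docker-compose down" waits 10s and then kills it
# CMD npm start

# Exec form: the app is PID 1, receives SIGTERM directly,
# and can exit promptly
CMD ["node", "server.js"]
```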

Upvotes: -1
