Dovid Gefen

Reputation: 403

How to handle updating a docker-compose based application in production

I have a docker-compose based application which I am deploying to a production server. Two of its containers share a directory's contents using a data volume, like so:

...
services:
  service1:
    volumes:
      - server-files:/var/www

  service2:
    volumes:
      - server-files:/var/www

  db:
    volumes:
      - db-persistent:/var/lib/mysql


volumes:
  server-files:
  db-persistent:

The service1's /var/www is populated when its Dockerfile is built. My understanding is that if I make changes to code stored in /var/ww when I rebuild service1 its updates will be hidden by the existing server-files volume.

What is the correct way to update this deployment so that changes propagate with minimal downtime and without deleting other volumes?

Edit

Just to clarify, my current deploy process works as follows:

  1. Update code locally and commit/push changes to GitHub
  2. Pull changes on the server
  3. Run docker-compose build to rebuild any changed images
  4. Run docker-compose up -d to recreate any updated containers

The issue is that changed code within /var/www is hidden by the already-existing named volume server-files. My question is: what is the best way to handle this update?
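The staleness is easy to confirm (a hypothetical check; the image name myproject_service1 is an assumption, see docker images for the real one):

docker-compose build service1
# What the freshly built image contains
docker run --rm --entrypoint cat myproject_service1 /var/www/index.php
# What the running container actually sees through the volume
docker-compose exec service1 cat /var/www/index.php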

Upvotes: 1

Views: 3074

Answers (2)

Charles Desbiens

Reputation: 1079

First of all, docker-compose isn't meant for production deployment. This issue illustrates one of the reasons why: no automatic rolling upgrades. Creating a single-node swarm would make your life easier. To deploy, all you would have to do is run docker stack deploy -c docker-compose.yml <stack-name>. However, you might have to tweak your compose file and do some initial setup.
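In concrete terms, that setup is roughly this (a sketch; the stack name myapp is arbitrary):

# One-time: turn the host into a single-node swarm
docker swarm init

# Deploy the stack; re-running the same command later performs a
# rolling update of any services whose images have changed
docker stack deploy -c docker-compose.yml myapp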

Second of all, you are misunderstanding how Docker is meant to be used. Creating a volume binding for your application code is a shortcut you use in development so that you don't have to rebuild your image every time you change your code. When you deploy your application, however, you build a production image that contains all the code it needs to run.

Once this production image is built, you push it up to an image repository (probably Docker Hub). Your production server pulls the image from that repository and uses it to create a container that runs your application.
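That workflow looks something like this (a sketch; the image name and tag are hypothetical):

# On a build machine or in CI
docker build -t myuser/myapp:1.0.1 .
docker push myuser/myapp:1.0.1

# On the production server
docker pull myuser/myapp:1.0.1
docker-compose up -d    # recreates only the containers whose images changed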

IF you're pulling your application code from your production server, then why use Docker at all? In that scenario, it's just making your life harder and adding extra steps when you could just run everything directly on your host VM and make a simple script to stop your apps, pull your code, and restart your apps.

Upvotes: 2

Dovid Gefen

Reputation: 403

I ended up handling this by managing the database volume db-persistent outside of docker-compose. Before running docker-compose up, I created the volume manually by running docker volume create db-persistent, and in docker-compose.yml I marked the volume as external with the following configuration:

volumes:
  db-persistent:
    external: true
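The one-time setup on the server is then just (sketch):

# Must exist before the first docker-compose up; compose fails fast
# if a volume marked external has not been created yet
docker volume create db-persistent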

My deploy process now looks as follows:

  1. Pull changes from GitHub.
  2. Run docker-compose build to rebuild any changed images.
  3. Shut down the existing application and remove its volumes by running docker-compose down -v.
  4. Run docker-compose up to start the application again.

In this new setup, running docker-compose down -v removes only the server-files volume; db-persistent is left untouched because docker-compose never removes volumes marked as external.
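Wrapped up as a deploy script, the whole process looks like this (a sketch; the remote and branch names are assumptions):

#!/bin/sh
set -e
git pull origin master    # 1. pull changes from GitHub
docker-compose build      # 2. rebuild any changed images
docker-compose down -v    # 3. removes server-files, keeps the external db-persistent
docker-compose up -d      # 4. start the application again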

Upvotes: 2
