Reputation: 403
I have a docker-compose based application which I am deploying to a production server. Two of its containers share a directory's contents using a data volume like so:
...
services:
  service1:
    volumes:
      - server-files:/var/www
  service2:
    volumes:
      - server-files:/var/www
  db:
    volumes:
      - db-persistent:/var/lib/mysql
volumes:
  server-files:
  db-persistent:
service1's /var/www is populated when its image is built from its Dockerfile. My understanding is that if I make changes to the code stored in /var/www and then rebuild service1, its updates will be hidden by the existing server-files volume.
What is the correct way to update this deployment so that changes propagate with minimal downtime and without deleting other volumes?
Edit
Just to clarify, my current deploy process works as follows:

1. docker-compose build to rebuild any changed containers.
2. docker-compose up -d to reload any updated containers.

The issue is that changed code within /var/www is hidden by the already-existing named volume server-files. My question is: what is the best way to handle this update?
Upvotes: 1
Views: 3074
Reputation: 1079
First of all, docker-compose isn't meant for production deployment. This issue illustrates one of the reasons why: there are no automatic rolling upgrades. Creating a single-node swarm would make your life easier. To deploy, all you would have to do is run docker stack deploy -c docker-compose.yml <stack-name>. However, you might have to tweak your compose file and do some initial setup.
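As a minimal sketch of that setup (the stack name myapp is assumed; replace it with your own):

```shell
# One-time: turn this host into a single-node swarm
docker swarm init

# Deploy the stack; re-running the same command after an image
# update triggers a rolling update of the changed services
docker stack deploy -c docker-compose.yml myapp
```

Note that swarm mode ignores the build: key in a compose file, so images must be built and pushed beforehand.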
Second of all, you are misunderstanding how docker is meant to be used. Creating a volume binding for your application code is only a shortcut that you do in development so that you don't have to rebuild your image every time you change your code. When you deploy your application however, you build a production image of your application that contains all the code needed to run.
Once this production image is built, you push it up to an image repository (probably docker hub). Your production server pulls the image from that repository, and uses it to create a container that runs your application.
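For example, assuming a hypothetical Docker Hub repository named myuser/myapp, that workflow looks roughly like:

```shell
# On the build machine (or in CI): build and tag a versioned image
docker build -t myuser/myapp:1.0.1 .
docker push myuser/myapp:1.0.1

# On the production server: pull the new image and recreate the container
docker pull myuser/myapp:1.0.1
docker-compose up -d
```

Tagging each release with a version rather than latest makes rollbacks a matter of pulling the previous tag.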
If you're pulling your application code onto your production server anyway, then why use Docker at all? In that scenario, it's just making your life harder and adding extra steps, when you could run everything directly on your host VM and write a simple script that stops your apps, pulls your code, and restarts them.
Upvotes: 2
Reputation: 403
I ended up handling this by managing the database volume db-persistent outside of docker-compose. Before running docker-compose up, I created the volume manually by running docker volume create db-persistent, and in docker-compose.yml I marked the volume as external with the following configuration:
volumes:
  db-persistent:
    external: true
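Putting it together, the top-level volumes section of docker-compose.yml (service definitions omitted) ends up as:

```
volumes:
  server-files:
  db-persistent:
    external: true
```

Only server-files remains under docker-compose's control; db-persistent is merely referenced.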
My deploy process now looks as follows:

1. docker-compose build to automatically build any changed containers.
2. docker-compose down -v to stop the application and remove its volumes.
3. docker-compose up to start the application again.

In this new setup, running docker-compose down -v only removes the server-files volume, leaving the db-persistent volume untouched.
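The steps above can be collected into a small deploy script (the -d flag is added here, an assumption, so the script returns instead of attaching to container logs):

```shell
#!/bin/sh
set -e

docker-compose build     # rebuild any changed images
docker-compose down -v   # removes server-files; external db-persistent survives
docker-compose up -d     # start the application again
```

Because db-persistent is declared external, down -v will never delete it; at worst docker-compose warns that it won't remove an external volume.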
Upvotes: 2