Nguyên Phan

Reputation: 133

How to update files between two containers with a shared volume

Situation

Let's say we have two containers, A and B.

A and B share a named volume, shared-volume. Container A has shared-volume mounted at /root/sharedA; container B has it mounted at /root/sharedB.

This means that if we go into container B and create a file "example.txt" in /root/sharedB, we can access "example.txt" from container A, so containers A and B share files through shared-volume.

I am using docker-compose for this:

version: '2.1'

volumes:
  shared-volume:

services:
  A:
    image: imageA
    volumes:
      - shared-volume:/root/sharedA

  B:
    image: imageB
    volumes:
      - shared-volume:/root/sharedB

The Problem

I am using imageB as a container for storing code and resources; imageA is a consumer, something like a web server, which uses the files from imageB. imageA and imageB share files via the named volume (as shown above).

From what I have tested, imageB shares files with imageA successfully.

The problem is that when I update imageB with newer files, the shared volume's files stay the same. I have to remove all containers and volumes and start them again before the newer files are applied; that is, I have to run docker-compose down -v and then docker-compose up -d.

imageA is something like a web server, which should never go down for any reason.

I wonder if I have made a mistake somewhere, or am missing something, that would let imageA get newer code from imageB without shutting down imageA and its volume?

Thank you.

Upvotes: 4

Views: 1958

Answers (1)

David Maze

Reputation: 159371

Docker treats volumes as holding critical user data, and it doesn't know anything about their contents. Docker will populate a named volume from an image on first use, but if you later change the underlying image, it won't update the volume, since doing so might corrupt user data.

That is, in your situation:

  1. Docker Compose creates the shared-volume.
  2. Docker launches B attaching shared-volume to it. Since shared-volume is empty, it is populated from imageB.
  3. You rebuild imageB and re-run docker-compose up.
  4. Docker launches B attaching shared-volume to it. Since shared-volume is not empty, it keeps the content it had from the previous run.
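The copy-on-first-use rule in steps 2 and 4 can be simulated with plain directories, no Docker required (the image and volume directory names here are placeholders standing in for imageB and shared-volume):

```shell
#!/bin/sh
# Plain-directory simulation of Docker's copy-on-first-use volume seeding:
# "image" stands in for imageB's filesystem, "volume" for shared-volume.
mkdir -p image volume
echo v1 > image/app.txt

seed_volume() {
  # Docker only seeds a named volume when it is empty
  if [ -z "$(ls -A volume)" ]; then
    cp -a image/. volume/
  fi
}

seed_volume                # first run: the volume receives v1
echo v2 > image/app.txt    # "rebuild imageB" with newer content
seed_volume                # second run: the volume is non-empty, nothing is copied
cat volume/app.txt         # prints v1, not v2
```

Only removing the volume (as docker-compose down -v does) makes it empty again, which is why that was the one sequence that picked up the newer files.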

Volumes are not an appropriate place to store code or libraries. In the completely abstract case as you've described it, your code should be built into imageA (which is running it), and when you change it, you should rebuild both images. docker-compose up --build can do this for you.

Your question hints at a layout where the A container is something like an Nginx or Apache web server, and from its point of view the "code and images" are just static JavaScript and PNG files that it's serving; they are "data". In that setup it would be appropriate to mount a data volume onto /var/www, but whatever build tool produces the output needs to explicitly copy it into the volume. One easy solution (assuming a JavaScript/Webpack project) is to bind-mount your project's dist directory and run npm run build on the host, instead of using a container for it.
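A hypothetical compose fragment for that bind-mount approach (the image name and container path are assumptions, not taken from your setup):

```yaml
version: '2.1'
services:
  A:
    image: imageA            # e.g. an Nginx-based web server image
    volumes:
      # Bind-mount the host's build output instead of using a named volume;
      # re-running `npm run build` on the host updates the served files
      # immediately, with no container or volume teardown.
      - ./dist:/var/www:ro
```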

Since the automatic Docker data copy only happens the very first time the container is run, you might need to manually copy data at startup time if it's important to have it in a volume. You can do this with an entrypoint script:

#!/bin/sh

# Copy application assets to the shared directory
cp -a ./assets /root/shared

# Run the CMD as the main container process
exec "$@"

# At the end of your Dockerfile
COPY entrypoint.sh /
ENTRYPOINT ["/entrypoint.sh"]
CMD as before
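To see what the entrypoint's copy step buys you, here is a minimal stand-alone sketch with plain directories (the assets and shared_vol paths are placeholders for the image's bundled files and the mounted volume):

```shell
#!/bin/sh
# Stand-ins for the image's bundled assets and the mounted shared volume
mkdir -p assets shared_vol
echo "new-version" > assets/app.js

# The entrypoint re-copies on every container start, so the volume always
# reflects the current image contents (unlike Docker's first-use-only seeding)
cp -a assets/. shared_vol/

cat shared_vol/app.js      # prints new-version
```

With this entrypoint in place, only the B service needs restarting after a rebuild (for example docker-compose up -d --no-deps B), so the A container and its volume can stay up.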

Upvotes: 4
