Reputation: 923
I want to know how I can allow a "child" (sibling) docker container to access some subdirectory of an already mounted volume. As an explanation, this is a simple setup:
I have the following Dockerfile, which just installs Docker in a Docker container:
FROM ubuntu
RUN apt-get update && apt-get install -y curl
RUN curl -fsSL https://get.docker.com/ | sh
I have the following data directory on my host machine
/home/user/data/
data1.txt
subdir/
data2.txt
Build the parent image:
[host]$> docker build -t parent .
Then run the parent container:
[host]$> docker run --rm --name parent -it -v /home/user/data/:/data/ -v /var/run/docker.sock:/var/run/docker.sock parent
Now I have a running container, and am "inside" the new container. Since I have the docker socket bound to the parent, I am able to run docker commands to create "child" containers, which are actually sibling containers. The data volume has been successfully mapped:
[parent]$> ls /data/
subdir data1.txt
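As a sanity check, the Docker commands I run inside the parent really do talk to the host's daemon, and inspecting the parent shows the /data mount's Source is the host path /home/user/data (output omitted):
[parent]$> docker ps
[parent]$> docker inspect parent --format '{{ json .Mounts }}'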
Now I want to create a sibling container that can only see the subdir directory:
[parent]$> docker run --rm --name child -it -v /data/subdir/:/data/ ubuntu
This creates a sibling container, and I am successfully "inside" it; however, the new data directory is empty. My assumption is that the source path I pass, /data/subdir/, is resolved on the host (where no such directory exists), rather than against the volume that was mounted when running the parent.
[child]$> ls /data/
<nothing>
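To check that assumption, I can inspect the child from the parent; the mount's Source should be the literal host path /data/subdir, which Docker creates as a new, empty directory on the host instead of reusing my mounted volume:
[parent]$> docker inspect child --format '{{ json .Mounts }}'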
What can I do to make this mapping work, so that the child can create files in the subdirectory and the parent container can see and access those files? The child must not be able to see data1.txt (or anything else above the subdirectory).
Upvotes: 6
Views: 4435
Reputation: 74720
"Sibling" container is the correct term, there is no direct relationship between what you have labeled the "parent" and "child" containers, even though you ran the docker
command in one of the containers.
The container with the docker socket mounted still controls the dockerd
running on the host, so any paths sent to dockerd
via the API will be in the hosts scope.
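For example (a sketch reusing the paths from your question), these two invocations produce exactly the same bind mount, because it is dockerd on the host that resolves the source path, not the docker CLI that sends the request:
# run from the host
docker run --rm -v /home/user/data:/data ubuntu ls /data
# run from inside the "parent" container -- same result, dockerd resolves
# /home/user/data on the host either way
docker run --rm -v /home/user/data:/data ubuntu ls /data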
There are docker commands where using the container's filesystem does change things: those where the docker utility itself accesses the local filesystem. docker build, docker cp, docker import, and docker export are examples of commands where the docker CLI reads from or writes to the local filesystem.
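For instance (a sketch, assuming the child container from the question is still running; report.txt is just a throwaway file name), docker cp reads its source path from the filesystem where the CLI runs and streams the content over the API, so inside the "parent" container it copies the parent's file, not a host file:
# create a file in the parent container's filesystem, then copy it into
# the sibling container named "child"; the CLI reads /tmp/report.txt locally
echo hello > /tmp/report.txt
docker cp /tmp/report.txt child:/tmp/report.txt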
Use -v /home/user/data/subdir:/data for the second container:
docker run --name parent_volume \
-it --rm -v /home/user/data:/data ubuntu
docker run --name child_volume \
-it --rm -v /home/user/data/subdir:/data ubuntu
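With those two mounts, anything written by child_volume shows up under subdir in parent_volume and on the host (a quick check; newfile.txt is just an example name):
# inside child_volume
echo hello > /data/newfile.txt
# inside parent_volume, the same file appears under the subdirectory
cat /data/subdir/newfile.txt
# and on the host
cat /home/user/data/subdir/newfile.txt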
The processes you run need to coordinate writes to data that is mounted into multiple containers so the data doesn't get clobbered.
Upvotes: 4