Reputation: 4743
I am using a volume when running a Docker container, with something like docker run --rm --network host --volume $PWD/some_dir:/tmp/some_dir .... Inside the container I am running Python code which creates the directory /tmp/some_dir (overwriting it in case it already exists) and puts files into it. If I run the container on my local dev machine, the files are available on my dev machine in /tmp/some_dir after the container has finished.
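The Python code inside the container does roughly the following (the file name is just a placeholder):

    import pathlib
    import shutil

    out = pathlib.Path("/tmp/some_dir")
    shutil.rmtree(out, ignore_errors=True)    # drop any previous contents
    out.mkdir(parents=True, exist_ok=True)
    (out / "result.txt").write_text("...")    # stands in for the real output files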
However, if I run the same container as part of a GitLab CI job ("Docker in Docker"; the container is not the image used for the job itself), the files are not accessible in /tmp/some_dir, even though the directory exists.
What could be a reason for the missing files?
Upvotes: 2
Views: 243
Reputation: 159382
The first half of the docker run -v option, if it's a directory path, is a directory specifically on the host machine. If a container has access to the Docker socket and launches another container, any directory mappings it provides are in the host filesystem's space, not the container filesystem's space. (If you're actually using Docker-in-Docker, it is probably the filesystem space of the container running the nested Docker daemon, but it's still not the calling container's filesystem space.)
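As a sketch of what goes wrong (the paths and image name are illustrative, not taken from the question):

    # inside the GitLab CI job container
    echo "$PWD"    # e.g. /builds/mygroup/myproject -- a path in the job container
    docker run --rm --volume "$PWD/some_dir:/tmp/some_dir" my-image
    # The daemon that actually runs the container resolves
    # /builds/mygroup/myproject/some_dir in *its* filesystem, creating an
    # empty directory there if it is missing, so the files land on that
    # side and never appear in the job container's $PWD/some_dir.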
The most straightforward option is to docker cp files out of the inner container after it's stopped but before you docker rm it; docker cp can copy a whole directory tree.
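A minimal sketch of that flow, assuming a throwaway container name result-container (the name and image are placeholders):

    docker run --name result-container my-image    # no --rm, so the stopped container sticks around
    docker cp result-container:/tmp/some_dir ./some_dir    # copies the whole directory tree
    docker rm result-container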
Upvotes: 1
Reputation: 454
Did you check the right directory on the right server? When you create $PWD/some_dir in a DinD context, the result should end up in a some_dir created in the docker user's home directory on the server running the GitLab CI container.
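One way to verify, assuming you can get a shell where the Docker daemon actually runs (the dind service container in a typical GitLab CI setup; <container> is whatever the inner container is called):

    # run this on the daemon side, not inside the CI job container
    docker inspect --format '{{ json .Mounts }}' <container>    # shows the resolved host-side source path
    ls -la some_dir    # checked relative to wherever $PWD expanded on that side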
Upvotes: 0