Reputation: 956
I've got a gitlab-ci.yml job that brings up two Docker containers. The first is a standard Redis container; the second runs a custom process that populates the Redis instance with data. The goal of this job is to grab the Redis container's dump.rdb file and pass it along to other jobs. So my job looks like this:
script:
  - cd subdir
  - docker run -d --name redis -v $PWD/redis-data:/data redis:latest
  - docker run my_custom_image
  - docker stop redis
artifacts:
  paths:
    - subdir/redis-data/dump.rdb
  expire_in: 5 minutes
So far so good. my_custom_image connects to the redis container and reports that it loaded everything correctly, and both containers shut down cleanly. However, subdir/redis-data is empty: no dump.rdb, nothing. Adding an ls -la subdir/redis-data after the docker stop call confirms this. Everything works correctly when I run this on my own machine; it only breaks on GitLab.
It looks to me like the gitlab-runner isn't running every step in the same directory, but that doesn't make much sense. Can anyone tell me why the mounted volume isn't getting the dump.rdb?
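To test the same-directory theory, the job script can be instrumented with a couple of diagnostic steps. This is just a sketch; the pwd and ls lines are my own debugging additions, not part of the original job:

```yaml
script:
  - cd subdir
  - pwd                    # confirm the cd persists across script lines
  - docker run -d --name redis -v $PWD/redis-data:/data redis:latest
  - docker run my_custom_image
  - docker stop redis
  - ls -la redis-data      # check whether dump.rdb actually landed in the mount
```

In my runs the directory listing comes back empty, even though pwd shows every step running where I expect.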
Upvotes: 0
Views: 1277
Reputation: 956
I found an old bug report here from someone who was having the same problem. It didn't look like anyone ever fixed the underlying issue, but they suggested just doing a docker cp to pull the data out of the running container instead. So I did:
script:
  - cd subdir
  - docker run -d --name redis -v $PWD/redis-data:/data redis:latest
  - docker run my_custom_image
  - echo SAVE | docker exec -i redis redis-cli
  - docker cp redis:/data/dump.rdb redis-data/dump.rdb
  - docker stop redis
artifacts:
  paths:
    - subdir/redis-data/dump.rdb
  expire_in: 5 minutes
The SAVE command ensures Redis has flushed everything to disk before docker cp pulls the dump file out. This is working fine for me; I wish it were more elegant, but there you go.
Upvotes: 1