Reputation: 410
I am trying to use a folder named tmp as a volume in a Docker container. To do so, I am using the following docker-compose.yml file:
version: "3"
services:
  master:
    image: singularities/spark
    command: start-spark master
    hostname: master
    ports:
      - "6066:6066"
      - "7070:7070"
      - "8080:8080"
      - "50070:50070"
      - "7077:7077"
    volumes:
      - "../data:/tmp/"
    deploy:
      placement:
        constraints:
          - node.role == manager
  worker:
    image: singularities/spark
    command: start-spark worker master
    environment:
      SPARK_WORKER_CORES: 1
      SPARK_WORKER_MEMORY: 4g
    links:
      - master
    volumes:
      - "../data:/tmp/"
The tmp folder exists in the singularities/spark image. After I run the following command, the folders and files under the tmp folder are deleted:
docker-compose up -d
Upvotes: 2
Views: 5883
Reputation: 159
This only works with single files, for example:

volumes:
  - "../data/config.properties:/tmp/config.properties"

Mounting individual files this way leaves the rest of the image's /tmp in place.
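As a minimal sketch of how that fits into the compose file from the question (assuming ../data/config.properties already exists on the host; if the host file is missing, Docker creates a directory at that path instead):

version: "3"
services:
  master:
    image: singularities/spark
    command: start-spark master
    volumes:
      # Bind-mount only this one file; the rest of the image's /tmp stays visible
      - "../data/config.properties:/tmp/config.properties"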
Upvotes: 1
Reputation: 14843
When you run docker-compose up -d, Docker bind-mounts your ../data host directory onto /tmp while creating the containers. The mount hides whatever the image already had under /tmp and exposes only the contents of ../data from the host machine, which is why the folder looks empty.

You will have to choose a container path other than /tmp if you want to keep the data created by the singularities/spark image.
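A minimal sketch of that change, assuming /data as the target path (the path itself is an assumption; any directory the image does not rely on would do):

version: "3"
services:
  master:
    image: singularities/spark
    command: start-spark master
    volumes:
      # Mount the host folder at a path other than /tmp so the bind mount
      # does not hide the files the image keeps under /tmp
      - "../data:/data"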
EDIT 1

The docker cp command can copy files between the host and a container. You want to copy from the image's /tmp to the host and then back from the host into tmp (not sure why you want to do this; it is not recommended and an extremely rare scenario). You can use docker run with a named volume or a host bind mount to start a container and reach the data, followed by docker cp to copy it between the host and the container.
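As a rough sketch (the container name spark-master and the ./data target are assumptions, not part of the original setup):

# Start a container from the image without the /tmp bind mount
docker run -d --name spark-master singularities/spark start-spark master

# Copy the contents of the image's /tmp out to the host
mkdir -p ./data
docker cp spark-master:/tmp/. ./data

# If needed, copy the files back from the host into the running container
docker cp ./data/. spark-master:/tmp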
Upvotes: 1
Reputation: 5209
The clue is in the name. The /tmp folder gets cleared at boot time (i.e. at container startup). You'll have to use a different folder name if you want persistent data.
Upvotes: 0