Reputation: 1462
I am trying to use docker-machine with docker-compose. The file docker-compose.yml has definitions as follows:
web:
build: .
command: ./run_web.sh
volumes:
- .:/app
ports:
- "8000:8000"
links:
- db:db
- rabbitmq:rabbit
- redis:redis
When running docker-compose up -d
all goes well until trying to execute the command, and an error is produced:
Cannot start container b58e2dfa503b696417c1c3f49e2714086d4e9999bd71915a53502cb6ef43936d: [8] System error: exec: "./run_web.sh": stat ./run_web.sh: no such file or directory
Local volumes are not mounted to the remote machine. What's the recommended strategy to mount the local volumes containing the web app's code?
Upvotes: 90
Views: 75179
Reputation: 819
It can be done with a combination of three tools: docker-machine mount, rsync, and inotifywait.
TL;DR
A script based on everything below is here.
Let's say you have your docker-compose.yml and run_web.sh in /home/jdcaballerov/web. Mount the machine's /home/jdcaballerov/web to a local directory and sync your files into it:
docker-machine mount machine:/home/jdcaballerov/web /tmp/some_random_dir
rsync -r /home/jdcaballerov/web /tmp/some_random_dir
Synchronize on every change of files in your directory:
inotifywait -r -m -e close_write --format '%w%f' /home/jdcaballerov/web | while read CHANGED_FILE
do
rsync -r /home/jdcaballerov/web /tmp/some_random_dir
done
BE AWARE - there are two directories which have the same path - one is on your local (host) machine, the second is on the docker machine.
Upvotes: 1
Reputation: 5420
All other answers were good for their time, but now (Docker Toolbox v18.09.3) everything works out of the box. You just need to add a shared folder to the VirtualBox VM.
Docker Toolbox automatically adds C:\Users as the shared folder /c/Users under the virtual Linux machine (using the VirtualBox shared folders feature), so if your docker-compose.yml file is located somewhere under this path and you mount the host machine's directories only under this path, everything should work out of the box.
For example:
C:\Users\username\my-project\docker-compose.yml:
...
volumes:
- .:/app
...
The . path will be automatically converted to the absolute path C:\Users\username\my-project and then to /c/Users/username/my-project. This is exactly how this path is seen from the point of view of the Linux virtual machine (you can check it: docker-machine ssh, then ls /c/Users/username/my-project). So the final mount will be /c/Users/username/my-project:/app.
All works transparently for you.
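The path conversion described above can be sketched as a small shell helper (purely illustrative - this function is not part of Docker Toolbox):

```shell
# Convert a Windows path like C:\Users\foo to the boot2docker form /c/Users/foo
win_to_vm_path() {
  p=$1
  drive=${p%%:*}      # drive letter, e.g. "C"
  rest=${p#*:}        # remainder, e.g. "\Users\foo"
  drive=$(printf '%s' "$drive" | tr 'A-Z' 'a-z')
  printf '/%s%s\n' "$drive" "$(printf '%s' "$rest" | tr '\\' '/')"
}

win_to_vm_path 'C:\Users\username\my-project'   # -> /c/Users/username/my-project
```

This mirrors what Docker Toolbox does automatically for paths under C:\Users.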
But this doesn't work if your host mount path is not under C:\Users. For example, if you put the same docker-compose.yml under D:\dev\my-project.
This can be fixed easily though:
1. Stop the virtual machine (docker-machine stop).
2. Open the VirtualBox GUI, open the Settings of the virtual machine named default, open the Shared Folders section and add the new shared folder:
Folder Path: D:\dev
Folder Name: d/dev
Press OK twice and close the VirtualBox GUI.
3. Start the virtual machine (docker-machine start).
That's all. All paths of the host machine under D:\dev should now work in docker-compose.yml mounts.
Upvotes: 1
Reputation: 3215
Docker-machine automounts the users directory... But sometimes that just isn't enough.
I don't know about docker 1.6, but in 1.8 you CAN add an additional mount to docker-machine.
CLI: (only works when the machine is stopped)
VBoxManage sharedfolder add <machine name/id> --name <mount_name> --hostpath <host_dir> --automount
So an example in Windows would be
/c/Program\ Files/Oracle/VirtualBox/VBoxManage.exe sharedfolder add default --name e --hostpath 'e:\' --automount
GUI: (does NOT require the machine to be stopped)
1. Open the settings of the VM <machine name> (default)
2. In the Shared Folders section, add a new share with Folder Path <host dir> (e:) and Folder Name <mount name> (e)
Manually mount in boot2docker:
1. SSH into the machine (docker-machine ssh default, or connect to docker-machine ip default, etc...)
2. sudo mkdir -p <local_dir>
3. sudo mount -t vboxsf -o defaults,uid=`id -u docker`,gid=`id -g docker` <mount_name> <local_dir>
But this is only good until you restart the machine, and then the mount is lost...
Adding an automount to boot2docker:
While logged into the machine:
1. Edit/create (as root) /mnt/sda1/var/lib/boot2docker/bootlocal.sh (sda1 may be different for you...)
2. Add
mkdir -p <local_dir>
mount -t vboxsf -o defaults,uid=`id -u docker`,gid=`id -g docker` <mount_name> <local_dir>
With these changes, you should have a new mount point. This is one of the few files I could find that is called on boot and is persistent. Until there is a better solution, this should work.
Old method: less recommended, but left as an alternative.
1. Edit (as root) /mnt/sda1/var/lib/boot2docker/profile (sda1 may be different for you...)
2. Add
add_mount() {
  if ! grep -q "try_mount_share $1 $2" /etc/rc.d/automount-shares ; then
    echo "try_mount_share $1 $2" >> /etc/rc.d/automount-shares
  fi
}
add_mount <local dir> <mount name>
As a last resort, you can take the slightly more tedious alternative and just modify the boot image:
1. git -c core.autocrlf=false clone https://github.com/boot2docker/boot2docker.git
2. cd boot2docker
3. git -c core.autocrlf=false checkout v1.8.1  # or your appropriate version
4. Edit rootfs/etc/rc.d/automount-shares
5. Add a try_mount_share <local_dir> <mount_name> line right before the fi at the end. For example:
try_mount_share /e e
Just be sure not to set the <local_dir> to anything the OS needs, like /bin, etc...
6. docker build -t boot2docker .  # This will take about an hour the first time :(
7. docker run --rm boot2docker > boot2docker.iso
This does work; it's just long and complicated.
docker version 1.8.1, docker-machine version 0.4.0
Upvotes: 95
Reputation: 818
Just thought I'd mention: I've been using 18.03.1-ce-win65 (17513) on Windows 10, and I noticed that if you've previously shared a drive and cached the credentials, once you change your password Docker will start mounting the volumes within containers as blank.
It gives no indication that what is actually happening is that it is failing to access the share with the old cached credentials. The solution in this scenario is to reset the credentials, either through the UI (Settings -> Shared Drives) or by disabling and then re-enabling drive sharing and entering the new password.
It would be useful if docker-compose gave an error in these situations.
Upvotes: 0
Reputation: 184
Since October 2017 there is a new command for docker-machine that does the trick - but make sure the directory is empty before executing it, otherwise its contents might get lost:
docker-machine mount <machine-name>:<guest-path> <host-path>
Check the docs for more information: https://docs.docker.com/machine/reference/mount/
PR with the change: https://github.com/docker/machine/pull/4018
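A brief usage sketch (the machine name dev and the paths are hypothetical; note that docker-machine mount relies on SSHFS, so an sshfs client must be installed on the host):

```
mkdir -p foo
docker-machine mount dev:/home/docker/foo foo
touch foo/bar                             # appears inside the machine
docker-machine mount -u dev:/home/docker/foo foo   # unmount again
```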
Upvotes: 5
Reputation: 541
I am using docker-machine 0.12.2 with the VirtualBox driver on my local machine. I found that there is a directory /hosthome/$(user name) from where you have access to local files.
Upvotes: 0
Reputation: 22672
Finally figured out how to upgrade Windows Docker Toolbox to v1.12.5 and keep my volumes working, by adding a shared folder in the Oracle VM VirtualBox manager and disabling path conversion. If you have Windows 10+, you're better off using the newer Docker for Windows.
1st the upgrade Pain:
Redis Database Example:
redis:
image: redis:alpine
container_name: redis
ports:
- "6379"
volumes:
- "/var/db/redis:/data:rw"
In Docker Quickstart Terminal ....
1. docker-machine stop default - ensure the VM is halted
In Oracle VM VirtualBox Manager ...
2. Add a shared folder to the default VM via the GUI or command line:
D:\Projects\MyProject\db => /var/db
In docker-compose.yml ...
3. Map the volume as "/var/db/redis:/data:rw"
In Docker Quickstart Terminal ....
4. Set COMPOSE_CONVERT_WINDOWS_PATHS=0 (for Toolbox version >= 1.9.0)
5. docker-machine start default to restart the VM.
6. cd D:\Projects\MyProject\
7. docker-compose up should work now. This now creates the Redis database in D:\Projects\MyProject\db\redis\dump.rdb.
Why avoid relative host paths?
I avoided relative host paths for Windows Toolbox, as they may introduce invalid '\' chars. It's not as nice as using paths relative to docker-compose.yml, but at least my fellow developers can easily do it even if their project folder is elsewhere, without having to hack the docker-compose.yml file (bad for SCM).
Original Issue
FYI ... here is the original error I got when I used nice clean relative paths, which used to work just fine with older versions. My volume mapping used to be just "./db/redis:/data:rw".
ERROR: for redis Cannot create container for service redis: Invalid bind mount spec "D:\\Projects\\MyProject\\db\\redis:/data:rw": Invalid volume specification: 'D:\Projects\MyProject\db\redis:/data
This breaks for two reasons:
1. docker-machine has no access to the D: drive
2. the invalid '\' characters - docker-compose adds them and then blames you for it!! Set COMPOSE_CONVERT_WINDOWS_PATHS=0 to stop this nonsense.
to stop this nonsense.I recommend documenting your additional VM shared folder mapping in your docker-compose.yml
file as you may need to uninstall VirtualBox again and reset the shared folder and anyway your fellow devs will love you for it.
Upvotes: 2
Reputation: 73
If you choose the rsync option with docker-machine, you can combine it with the docker-machine ssh <machinename>
command like this:
rsync -rvz --rsh='docker-machine ssh <machinename>' --progress <local_directory_to_sync_from> :<host_directory_to_sync_to>
It uses this command format of rsync, leaving HOST
blank:
rsync [OPTION]... SRC [SRC]... [USER@]HOST:DEST
(http://linuxcommand.org/man_pages/rsync1.html)
Upvotes: 4
Reputation: 3393
To summarize the posts here, attached is an updated script to create an additional host mount point and automount it when VirtualBox restarts. The working environment, in brief: Windows 7, docker-machine.exe version 0.7.0, VirtualBox 5.0.22.
#!/usr/bin/env bash
: ${NAME:=default}
: ${SHARE:=c/Proj}
: ${MOUNT:=/c/Proj}
: ${VBOXMGR:=C:\Program Files\Oracle\VirtualBox\VBoxManage.exe}
SCRIPT=/mnt/sda1/var/lib/boot2docker/bootlocal.sh
## set -x
docker-machine stop $NAME
"$VBOXMGR" sharedfolder add $NAME --name c/Proj --hostpath 'c:\' --automount 2>/dev/null || :
docker-machine start $NAME
docker-machine env $NAME
docker-machine ssh $NAME "echo 'mkdir -p $MOUNT' | sudo tee $SCRIPT"
docker-machine ssh $NAME "echo 'sudo mount -t vboxsf -o rw,user $SHARE $MOUNT' | sudo tee -a $SCRIPT"
docker-machine ssh $NAME 'sudo chmod +x /mnt/sda1/var/lib/boot2docker/bootlocal.sh'
docker-machine ssh $NAME 'sudo /mnt/sda1/var/lib/boot2docker/bootlocal.sh'
#docker-machine ssh $NAME 'ls $MOUNT'
Upvotes: 0
Reputation: 179
At the moment I can't really see any way to mount volumes on machines, so for now the approach would be to somehow copy or sync the files you need onto the machine.
There are conversations about how to solve this issue on the docker-machine GitHub repo. Someone made a pull request implementing scp on docker-machine, and it's already merged on master, so it's very likely that the next release will include it.
Since it's not yet released, for now I would recommend that, if you have your code hosted on GitHub, you just clone your repo before you run the app:
web:
build: .
command: sh -c "git clone https://github.com/my/repo.git; ./repo/run_web.sh"
volumes:
- .:/app
ports:
- "8000:8000"
links:
- db:db
- rabbitmq:rabbit
- redis:redis
Update: looking further, I found that the feature is already available in the latest binaries. Once you get them, you'll be able to copy your local project by running a command like this:
docker-machine scp -r . dev:/home/docker/project
This being the general form:
docker-machine scp [machine:][path] [machine:][path]
So you can copy files from, to and between machines.
Cheers!
Upvotes: 14
Reputation: 4796
Also ran into this issue, and it looks like local volumes are not mounted when using docker-machine. A hack solution is to:
1. Get the current working directory of the docker-machine instance: docker-machine ssh <name> pwd
2. Use a command line tool like rsync to copy the folder to the remote system:
rsync -avzhe ssh --progress <name_of_folder> username@remote_ip:<result_of_pwd_from_1>
The default pwd is /root, so the command above would be rsync -avzhe ssh --progress <name_of_folder> username@remote_ip:/root
NB: you will need to supply the password for the remote system. You can quickly create one by ssh-ing into the remote system and creating a password.
3. Change the volume mount point in your docker-compose.yml file from .:/app to /root/<name_of_folder>:/app
4. Run docker-compose up -d
NB: when changes are made locally, don't forget to rerun rsync to push the changes to the remote system.
It's not perfect, but it works. An issue is ongoing: https://github.com/docker/machine/issues/179
Other projects that attempt to solve this include docker-rsync.
Upvotes: 28
Reputation: 103905
I assume the run_web.sh file is in the same directory as your docker-compose.yml file. Then the command should be command: /app/run_web.sh.
Unless the Dockerfile (which you are not disclosing) takes care of putting the run_web.sh file into the Docker image.
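For completeness, a minimal Dockerfile along those lines might look like this (a sketch only - the base image and paths are assumptions, since the actual Dockerfile was not disclosed):

```
# Hypothetical Dockerfile for the compose service above
FROM python:2.7
WORKDIR /app
# Bake the script into the image so it exists even without the volume
COPY run_web.sh /app/run_web.sh
RUN chmod +x /app/run_web.sh
```

Keep in mind that the .:/app volume mount shadows the image's /app at runtime, so the script must also exist in the mounted host directory - which, with docker-machine, means it must exist on the machine doing the mounting. That is exactly the cause of the original error.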
Upvotes: 0