Reputation: 23516
I am setting up Docker for Windows on my private PC.
When I set it up a while ago on my office laptop I had the same issue, but it just stopped happening.
So I am stuck with this:
I have a working Docker project (on my other computer) with a docker-compose.yml like this:
version: '2'
services:
  web:
    depends_on:
      - db
    build: .
    env_file: ./docker-compose.env
    command: bash ./run_web_local.sh
    volumes:
      - .:/srv/project
    ports:
      - 8001:8001
    links:
      - db
      - rabbit
    restart: always
Dockerfile:
### STAGE 1: Build ###
# We label our stage as 'builder'
FROM node:8-alpine as builder
RUN npm set progress=false && npm config set depth 0 && npm cache clean --force
# build backend
ADD package.json /tmp/package.json
ADD package-lock.json /tmp/package-lock.json
RUN cd /tmp && npm install
RUN mkdir -p /backend-app && cp -a /tmp/node_modules /backend-app
### STAGE 2: Setup ###
FROM python:3
# Install Python dependencies
COPY requirements.txt /tmp/requirements.txt
RUN pip3 install -U pip
RUN pip3 install --no-cache-dir -r /tmp/requirements.txt
# Set env variables used in this Dockerfile (add a unique prefix, such as DOCKYARD)
# Local directory with project source
ENV PROJECT_SRC=.
# Directory in container for all project files
ENV PROJECT_SRVHOME=/srv
# Directory in container for project source files
ENV PROJECT_SRVPROJ=/srv/project
# Create application subdirectories
WORKDIR $PROJECT_SRVPROJ
RUN mkdir media static staticfiles logs
# make folders available for other containers
VOLUME ["$PROJECT_SRVHOME/media/", "$PROJECT_SRVHOME/logs/"]
# Copy application source code to PROJECT_SRVPROJ
COPY $PROJECT_SRC $PROJECT_SRVPROJ
COPY --from=builder /backend-app/node_modules $PROJECT_SRVPROJ/node_modules
# Copy entrypoint script into the image
WORKDIR $PROJECT_SRVPROJ
# EXPOSE port 8000 to allow communication to/from server
EXPOSE 8000
CMD ["./run_web.sh"]
docker-compose.env:
C_FORCE_ROOT=True
DJANGO_CELERY_BROKER_URL=amqp://admin:mypass@rabbit:5672//
DJANGO_DATABASE_ENGINE=django.db.backends.mysql
DJANGO_DATABASE_NAME=project-db
DJANGO_DATABASE_USER=project-user
DJANGO_DATABASE_PASSWORD=mypassword
DJANGO_DATABASE_HOST=db
DJANGO_ALLOWED_HOSTS=127.0.0.1,localhost
DJANGO_DEBUG=True
DJANGO_USE_DEBUG_TOOLBAR=off
DJANGO_TEST_RUN=off
PYTHONUNBUFFERED=0
run_web_local.sh:
#!/bin/bash
echo django shell commands
python ./manage.py migrate
echo Starting django server on 127.0.0.1:8000
python ./manage.py runserver 127.0.0.1:8000
When I call docker-compose up web
I get the following error:
web_1 | bash: ./run_web_local.sh: No such file or directory
I can run bash run_web_local.sh just fine from my Windows PowerShell and from inside the container. I also tried changing bash in the command in the docker-compose file, and tried with a backslash, without the dot, etc. And: the exact same setup works on my other laptop.
Any ideas? None of the two million GitHub posts solved the problem for me.
Thanks!
Update
Removing the volumes: section from the docker-compose file makes it work, as stated here, but then I don't have an instant mapping of my source files into the container. That's kind of important for me...
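For anyone comparing symptoms: one way this exact error can appear even though the file exists is a stray Windows carriage return in the invoking command. This is a constructed local demo, not a confirmed diagnosis of the setup above:

```shell
#!/bin/sh
# The script itself exists and runs fine.
printf 'echo ok\n' > run_web_local.sh
bash ./run_web_local.sh

# But if the invoking command was saved with CRLF line endings, a trailing
# carriage return becomes part of the filename, and bash reports
# "No such file or directory" for a file that is clearly there.
bash "./run_web_local.sh$(printf '\r')" || true
```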
Upvotes: 13
Views: 37521
Reputation: 31
I'm leaving this here just in case someone else has the issue I had where you're running Linux and have tried all other solutions to no avail.
It turns out that Docker behaves a little funny with mountable drives. Specifically, if your drive is not mounted on boot (e.g. through fstab), the container may fail to find files inside it despite still launching. Make sure your drive is mounted on boot, and potentially change its filesystem, if it's NTFS etc., to ext4 or something more Linux-y, and that should hopefully fix it.
If the issue still persists and you have the drive mounted, you might need to allow it through a 'virtual file share' - which can be done in Docker Desktop through Settings > Resources > File Sharing by adding the folder(s) where your file resides to the file-sharing list. Details seem vague for doing this purely through the command line, so I'm not sure if it's entirely possible...
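For illustration, a boot-time mount entry in /etc/fstab looks roughly like the following (the UUID, mount point, and filesystem type are placeholders, not taken from any real setup):

```
# /etc/fstab - mount the data drive at boot so Docker can resolve paths on it
UUID=1234-abcd-5678-ef90  /mnt/data  ext4  defaults  0  2
```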
Upvotes: 0
Reputation: 146
I had the same issue recently, and the problem went away after using an editor to change the line endings of the .sh entrypoint scripts to Unix style (LF).
In my case I'm not sure why it happened, because git normally handles line endings well depending on whether the host is Linux or Windows, but I still ended up in this situation. If you have files mounted in both the container and the (Windows) host, then depending on how you edit them they may be changed to Linux line endings from inside the container, which also affects them on the Windows host. Afterwards git reported the .sh file as changed, but with no additions and no deletions; a graphical compare tool showed that only the line endings had changed.
As another troubleshooting step, you can start your container with the entrypoint overridden to sh, for example, and then check from inside the running container how Linux sees the entrypoint script; you can even run it there, and you will see the exact same error.
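As a concrete sketch of checking and fixing this from a shell (GNU sed assumed; dos2unix would work equally well):

```shell
#!/bin/sh
# Simulate a script that was saved with Windows (CRLF) line endings.
printf '#!/bin/bash\r\necho hello\r\n' > run_web_local.sh

# Detect carriage returns: grep -c counts affected lines.
grep -c "$(printf '\r')" run_web_local.sh    # 2 lines end in CRLF

# Strip the trailing \r on every line (GNU sed; same effect as dos2unix).
sed -i 's/\r$//' run_web_local.sh

# Verify: no carriage returns remain.
grep -c "$(printf '\r')" run_web_local.sh || echo "LF only"
```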
Upvotes: 10
Reputation: 159
Do not use -d at the end. Instead of the command docker-compose -f start_tools.yaml up -d, use docker-compose -f start_tools.yaml up
Upvotes: -1
Reputation: 431
You should issue the following command before cloning the repository:
git config --global core.autocrlf false
This stops git from converting line endings on checkout, so the scripts keep the UNIX-style (LF) endings stored in the repository.
Then clone the repository and proceed.
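A per-repository alternative worth noting (a common convention, not part of the original answer) is a .gitattributes entry that pins shell scripts to LF on every clone, regardless of each machine's autocrlf setting:

```
# .gitattributes - always check out *.sh files with LF line endings
*.sh text eol=lf
```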
Upvotes: 5
Reputation: 19
One thing I noticed on Ubuntu 18.04.3 is that you need to install docker-compose in addition to docker.io to get docker-compose to work.
After I did that, docker-compose -f [filename.yml] up worked without issue.
Upvotes: 1
Reputation: 677
If possible, can you please provide all the files related to this, so that I can try to reproduce the issue?
It seems like the command is not executing in the directory where run_web_local.sh exists.
You can check the current workdir by replacing the command in docker-compose.yml with:
command: sh -c "pwd && bash ./run_web_local.sh"
(The sh -c wrapper is needed because Compose does not run a string command through a shell, so a bare pwd && ... would not be interpreted as two chained commands.)
Upvotes: 2
Reputation: 36
It may be because the bash file is not in the root path, or not in the root of the workdir. Check where it is in the container and verify that the path is correct.
Upvotes: 2