Reputation: 1571
I have two docker-compose.yml files, one to setup the container and the other for any subsequent run of the container:
docker-compose.setup.yml:
version: '3'
services:
  db:
    image: "postgres:11.1"
    env_file:
      - ./volumes/postgres_config/env_file
    networks:
      - db_nw
  pyramid_app:
    image: image_from_dockerfile
    env_file:
      - ./volumes/postgres_config/env_file
    volumes:
      - ./volumes/pyramid_app:/app/src
    working_dir: /app
    expose:
      - 6543
    command: >
      sh -c "/app/venv/bin/pip install -r /app/src/requirements.pip &&
      /app/venv/bin/pip install -e '/app/src[testing]' &&
      /app/venv/bin/pserve /app/src/development.ini --reload"
    networks:
      - db_nw
      - web_nw
    depends_on:
      - db
  nginx:
    image: nginx:1.13.5
    ports:
      - "6543:80"
    volumes:
      - ./volumes/nginx_config:/etc/nginx/conf.d
    networks:
      - web_nw
    depends_on:
      - pyramid_app
networks:
  db_nw:
    driver: bridge
  web_nw:
    driver: bridge
volumes:
  conf.d:
  src:
docker-compose.yml:
version: '3'
services:
  db:
    image: "postgres:11.1"
    env_file:
      - ./volumes/postgres_config/env_file
    networks:
      - db_nw
  pyramid_app:
    image: image_from_dockerfile
    env_file:
      - ./volumes/postgres_config/env_file
    volumes:
      - ./volumes/pyramid_app:/app/src
    working_dir: /app
    expose:
      - 6543
    command: /app/venv/bin/pserve /app/src/development.ini --reload
    networks:
      - db_nw
      - web_nw
    depends_on:
      - db
  nginx:
    image: nginx:1.13.5
    ports:
      - "6543:80"
    volumes:
      - ./volumes/nginx_config:/etc/nginx/conf.d
    networks:
      - web_nw
    depends_on:
      - pyramid_app
networks:
  db_nw:
    driver: bridge
  web_nw:
    driver: bridge
volumes:
  conf.d:
  src:
The docker-compose.setup.yml runs fine and starts my webapp, but I get a "no such file or directory" error any time I try to run the subsequent docker-compose.yml file:
PS C:\Users\Raj\Projects\github_example> docker-compose up
Starting 81f076500a73_github_example_db_1 ... done
Recreating bc2fafc2039d_github_example_pyramid_app_1 ... error
ERROR: for bc2fafc2039d_github_example_pyramid_app_1 Cannot start service pyramid_app: OCI runtime create failed: container_linux.go:344: starting container process caused "exec: \"/app/venv/bin/pserve\": stat /app/venv/bin/pserve: no such file or directory": unknown
ERROR: for pyramid_app Cannot start service pyramid_app: OCI runtime create failed: container_linux.go:344: starting container process caused "exec: \"/app/venv/bin/pserve\": stat /app/venv/bin/pserve: no such file or directory": unknown
ERROR: Encountered errors while bringing up the project.
Also, this is my Dockerfile:
FROM ubuntu:18.04
MAINTAINER Raj <[email protected]>
ENV PYTHONUNBUFFERED 1
RUN apt-get -yqq update && apt-get install -yqq python3 python3-dev python3-pip python3-venv
RUN mkdir -p /app/venv
RUN python3 -m venv /app/venv
RUN ls /app/venv
RUN /app/venv/bin/pip install --upgrade pip setuptools
WORKDIR /app
Upvotes: 0
Views: 148
Reputation: 159712
A standard use of the Docker tools is to install your application in the Dockerfile. You wouldn't run a separate Docker Compose sequence to build the application; all of the "build" steps would go in the docker build sequence. Your Dockerfile could look like:
FROM ubuntu:18.04
ENV PYTHONUNBUFFERED 1
# Install system-level dependencies
RUN apt-get -yqq update \
 && apt-get install -yqq python3 python3-dev python3-pip \
 && pip3 install --upgrade pip setuptools
# Install Python dependencies (in the Docker-isolated Python)
WORKDIR /app/src
COPY pyramid_app/requirements.pip .
RUN pip3 install -r requirements.pip
# Install the application proper
COPY pyramid_app/ ./
RUN pip3 install -e '.[testing]'
# Metadata to say how to run the application
EXPOSE 6543
CMD pserve ./development.ini --reload
Now that all of that setup information, the application source, and the default command are in the Docker image proper, you can just run it, without any special run-time setup:
version: '3'
services:
  pyramid_app:
    build: .
    env_file:
      - ./volumes/postgres_config/env_file
I would remove the special segregated network setup and just use the single default network Docker Compose creates for you, and I wouldn't try to force Docker to do all of its work in host-system directories. You don't need networks: or volumes:; expose: and working_dir: come from the Dockerfile; and depends_on: mostly isn't useful (nor, for that matter, is expose:).
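Putting those simplifications together, a complete docker-compose.yml for all three services might look something like this. This is only a sketch: the service names, image tags, and host paths are taken from the question, and it assumes the Dockerfile above sits next to the compose file.

```yaml
version: '3'
services:
  db:
    image: "postgres:11.1"
    env_file:
      - ./volumes/postgres_config/env_file
  pyramid_app:
    build: .
    env_file:
      - ./volumes/postgres_config/env_file
  nginx:
    image: nginx:1.13.5
    ports:
      - "6543:80"
    volumes:
      - ./volumes/nginx_config:/etc/nginx/conf.d
```

All three services join the default network Compose creates, so pyramid_app can reach the database at the hostname db and nginx can proxy to pyramid_app by service name; docker-compose up --build then rebuilds the image and starts everything in one step.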
What's actually going on in your proposed setup is that your Dockerfile creates an empty Python virtual environment in the image; the docker-compose.setup.yml file creates a container from that image and installs software into the virtual environment in that container, but then discards the container and its installed software; and the second docker-compose.yml file starts a fresh container from the same image, whose virtual environment is still empty.
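You can see the image-vs-container distinction with a quick experiment (hypothetical container name; requires a running Docker daemon):

```shell
# Create a container from the stock image and write a file into its
# writable layer -- analogous to the pip installs in the setup file.
docker run --name setup-demo ubuntu:18.04 sh -c 'touch /installed-here'

# Start a *new* container from the same image: the file is absent,
# because the first container's layer was never committed to the image.
docker run --rm ubuntu:18.04 ls /installed-here

# Clean up the demo container.
docker rm setup-demo
```

The second docker run fails with "No such file or directory" for exactly the same reason your pserve binary is missing: changes made inside one container never affect new containers started from the same image.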
Upvotes: 1