Starcat

Reputation: 701

Is there a way to configure docker-compose to share Python packages between container services?

I'm trying to dockerize my Django app with a couple of Celery workers. The Django app and the Celery workers are all built into the same container image - same Python base image, same pipenv, same packages installed from the Pipfile, same app code. The only difference is that I want one container to run the Django app server, and the other container to run my Celery workers.

When I run docker-compose up, Docker copies the app and installs the same Python packages once per service, so it does the exact same work twice and the build takes twice as long.

I want to know if there is a way to copy my app and install the packages ONCE, and reuse that for both containers instead of installing the same thing twice.

Dockerfile

FROM python:3.5.6
# Copy the application code into the image
COPY . /app/
WORKDIR /app/
# Pin pipenv, then install the Pipfile's packages system-wide (no virtualenv)
RUN pip install pipenv==2018.11.26
ADD Pipfile Pipfile
RUN pipenv install --deploy --system
EXPOSE 8000
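
A side note on the Dockerfile above: because COPY . /app/ runs before the dependency install, any change to the source invalidates the cached pipenv layer and forces a full reinstall on the next build. A common reordering - a sketch only, assuming a Pipfile.lock is committed alongside the Pipfile - copies the manifests first so the install layer stays cached between code changes:

FROM python:3.5.6
WORKDIR /app/
RUN pip install pipenv==2018.11.26
# Copy only the dependency manifests first; this layer (and the
# install below) is reused from cache until the Pipfile changes
COPY Pipfile Pipfile.lock ./
RUN pipenv install --deploy --system
# Copy the application code last, so source edits no longer
# invalidate the dependency layers
COPY . /app/
EXPOSE 8000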

docker-compose.yml

version: '2'
services:
  app:
    restart: always
    build: .
    expose:
      - "8000"
    container_name: "app"
    image: debian/latest
    links:
      - postgres
      - redis
    depends_on:
      - postgres
      - redis
    ports:
      - '8000:8000'
    networks:
      - network1
      - nginx_network
    volumes:
      - ./:/app
      - ./data:/app/data
      - static_volume:/app/static
      - ./logs:/app/logs
    entrypoint: ["sh", "/app/docker-entrypoint.sh"]
    env_file:
      - .env
    environment:
      - DJANGO_SETTINGS_MODULE=app.settings.production
  celery_default:
    restart: always
    build: .
    container_name: "celery_default"
    networks:
      - network1
    links:
      - redis
      - postgres
    depends_on:
      - postgres
      - redis
      - celerybeat
    volumes:
      - ./:/app
      - ./data:/app/data
      - ./logs:/app/logs
      - ./celery:/app/celery
    env_file:
      - .env
    entrypoint: "celery -A app worker -Q celery -l debug -n celery_worker --concurrency=2 --logfile=./celery/logs/default.log"

Upvotes: 0

Views: 617

Answers (1)

C.Nivs

Reputation: 13106

What I would do is define an image tag in your compose file, and have the other service use that image:

version: '2'
services:
  app:
    restart: always
    build: .
    image: your-custom-image # Notice I've created a custom image tag here
    expose:
      - "8000"
    container_name: "app"
    links:
      - postgres
      - redis
    depends_on:
      - postgres
      - redis
    ports:
      - '8000:8000'
    networks:
      - network1
      - nginx_network
    volumes:
      - ./:/app
      - ./data:/app/data
      - static_volume:/app/static
      - ./logs:/app/logs
    entrypoint: ["sh", "/app/docker-entrypoint.sh"]
    env_file:
      - .env
    environment:
      - DJANGO_SETTINGS_MODULE=app.settings.production
  celery_default:
    restart: always
    image: your-custom-image # No build directory, just reuse that image
    container_name: "celery_default"
    networks:
      - network1
    links:
      - redis
      - postgres
    depends_on:
      - postgres
      - redis
      - celerybeat
    volumes:
      - ./:/app
      - ./data:/app/data
      - ./logs:/app/logs
      - ./celery:/app/celery
    env_file:
      - .env
    entrypoint: "celery -A app worker -Q celery -l debug -n celery_worker --concurrency=2 --logfile=./celery/logs/default.log"

This way you only build the image once, and the other service just uses that built image.
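
For reference, the pattern reduces to this minimal shape (service and image names are placeholders carried over from the answer above):

version: '2'
services:
  app:
    build: .                  # built once from this directory's Dockerfile...
    image: your-custom-image  # ...and tagged with this name
  celery_default:
    image: your-custom-image  # no build key: reuses the tag built above

If you want to be explicit about ordering, running docker-compose build first (which only builds services that have a build key, i.e. app here) and then docker-compose up guarantees the tag exists locally before the worker container is created.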

Upvotes: 1
