Benyamin Jafari

Reputation: 34016

How to run multiple Python scripts and an executable file using Docker?

I want to create a container that contains two Python packages as well as a package consisting of an executable file.


Here's my main package (dockerized_project) tree:

dockerized_project
├── docker-compose.yml
├── Dockerfile
├── exec_project
│   ├── config
│   │   └── config.json
│   ├── config.json
│   └── gowebapp
├── pythonic_project1
│   ├── __main__.py
│   ├── requirements.txt
│   ├── start.sh
│   └── utility
│       └── utility.py
└── pythonic_project2
    ├── collect
    │   └── collector.py
    ├── __main__.py
    ├── requirements.txt
    └── start.sh

Dockerfile content:

FROM ubuntu:18.04

RUN apt-get update
RUN apt-get install -y python3.6 python3-pip python3-dev build-essential gcc \
    libsnmp-dev snmp-mibs-downloader

RUN pip3 install --upgrade pip

RUN mkdir /app
WORKDIR /app
COPY . /app

WORKDIR /app/pythonic_project1
RUN pip3 install -r requirements.txt
WORKDIR /app/pythonic_project2
RUN pip3 install -r requirements.txt

WORKDIR /app/pythonic_project1
CMD python3 __main__.py

WORKDIR /app/pythonic_project2
CMD python3 __main__.py

WORKDIR /app/exec_project
CMD ["./gowebapp"]

docker-compose.yml content:

version: '3'

services:
  proto_conversion:
      build: .
      image: pc:2.0.0
      container_name: proto_conversion
#      command:
#        - "bash pythonic_project1/start.sh"
#        - "bash pythonic_project2/start.sh"
      restart: unless-stopped
      ports:
        - 8008:8008
      tty: true

Problem:

When I run this project with docker-compose up --build, only the last CMD runs. Hence, I think the earlier CMD instructions in the Dockerfile are overridden, because when I remove the last two CMDs, the first CMD works well.

Is there any approach to run multiple Python scripts and an executable file in the background?

I've also tried using the bash scripts, without any success either.

Upvotes: 12

Views: 39850

Answers (3)

Farzad Vertigo

Reputation: 2818

As mentioned in the documentation, there can be only one CMD in a Dockerfile; if there are more, the last one overrides the others and takes effect. A key point of using Docker is to isolate your programs, so at first glance you might want to move them to separate containers and have them talk to each other through a shared volume or a Docker network. But if you really need them to run in the same container, including them in a bash script and replacing the last CMD with CMD ["./run.sh"] will run them alongside each other:

#!/bin/bash

# Start the first script in the background, then hand the foreground to the second.
python3 /path/to/script1.py &
exec python3 /path/to/script2.py

Add COPY run.sh . to the Dockerfile and RUN chmod a+x run.sh to make it executable; the final CMD should be CMD ["./run.sh"].
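
Putting that together, the tail of the Dockerfile might look like this (a sketch; the /app working directory is an assumption carried over from the question's Dockerfile):

COPY run.sh /app/run.sh
RUN chmod a+x /app/run.sh
WORKDIR /app
CMD ["./run.sh"]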

Upvotes: 22

David Maze

Reputation: 158898

Best practice is to launch these as three separate containers. That's doubly true since you're taking three separate applications, bundling them into a single container, and then trying to launch three separate things from it.

Create a separate Dockerfile in each of your project subdirectories. These can be simpler, especially for the one that just contains a compiled binary:

# exec_project/Dockerfile
FROM ubuntu:18.04
WORKDIR /app
COPY . ./
CMD ["./gowebapp"]

Then, in your docker-compose.yml file, have three separate stanzas to launch the containers:

version: '3'
services:
  pythonic_project1:
    build: ./pythonic_project1
    ports:
      - 8008:8008
    environment:
      PY2_URL: 'http://pythonic_project2:8009'
      GO_URL: 'http://execproject:8010'
  pythonic_project2:
    build: ./pythonic_project2
  execproject:
    build: ./exec_project

If you really can't rearrange your Dockerfiles, you can at least launch three containers from the same image in the docker-compose.yml file:

services:
  pythonic_project1:
    build: .
    working_dir: /app/pythonic_project1
    command: python3 __main__.py
  pythonic_project2:
    build: .
    working_dir: /app/pythonic_project2
    command: python3 __main__.py
  execproject:
    build: .
    working_dir: /app/exec_project
    command: ./gowebapp

There are several good reasons to structure your project with multiple containers and images:

  • If you roll your own shell script and use background processes (as the other answers do), nothing notices if one of the processes dies; here you can use Docker's restart mechanism to restart individual containers (see the sketch after this list).
  • If you have an update to one of the programs, you can update and restart only that single container and leave the rest intact.
  • If you ever use a more complex container orchestrator (Docker Swarm, Nomad, Kubernetes) the different components can run on different hosts and require a smaller block of CPU/memory resource on a single node.
  • If you ever use a more complex container orchestrator, you can individually scale up components that are using more CPU.
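
For example, the restart policy the question already uses can be applied per service, so Docker restarts just the piece that died (a sketch based on the first stanza above):

services:
  pythonic_project1:
    build: ./pythonic_project1
    restart: unless-stopped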

Upvotes: 3

frankegoesdown

Reputation: 1924

Try it via an entrypoint.sh script:

ENTRYPOINT ["/docker_entrypoint.sh"]

docker_entrypoint.sh:

#!/bin/bash

set -e

# Run the first script in the background; exec makes the second the container's main process.
python3 not__main__.py &
exec python3 __main__.py

symbol & says that you run service as daemon in background

Upvotes: 5
