Reputation: 27278
I want to do something like this where I can run multiple commands in the following code:
db:
  image: postgres
web:
  build: .
  command: python manage.py migrate
  command: python manage.py runserver 0.0.0.0:8000
  volumes:
    - .:/code
  ports:
    - "8000:8000"
  links:
    - db
How could I execute multiple commands?
Upvotes: 1012
Views: 1261044
Reputation: 165
I tried different combinations but the one that is not explicitly shown here is:
entrypoint: ["bash", "-c"]
command: |
  '
  echo "Container running"
  sleep infinity
  '
The entrypoint expects one big string to execute. The | block scalar preserves line breaks exactly as they appear in the string.
Upvotes: -1
Reputation: 3225
I wanted to run cron jobs in Docker that execute things in other containers, which led to this pure docker-compose based solution without having to build a custom image:
services:
  cronjobs:
    image: docker:27-cli
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    environment:
      TZ: Europe/Berlin
      CRONTAB: |
        * * * * * docker ps -a
    command: >
      sh -ceux "
      echo \"$${CRONTAB}\" > /etc/crontabs/crontab;
      crontab /etc/crontabs/crontab;
      crond -f
      "
Upvotes: 0
Reputation: 467
If you need one service to wait for another to actually be ready, and not just to start in order, you can use a health check. Something like:
db:
  image: postgres
  container_name: postgres_test
  ports:
    - "8000:8000"
  healthcheck:
    test: ["CMD-SHELL", "pg_isready -h localhost -p 5432"]
    interval: 5s
    timeout: 3s
    retries: 10
web:
  build: .
  command: command1 && command2
  volumes:
    - .:/code
  ports:
    - "8000:8000"
  depends_on:
    db:
      condition: service_healthy
This makes sure the second service is started only once the first service is healthy/reachable according to the self-defined health check (in this case, as soon as port 5432 is reachable on that service).
Upvotes: -2
Reputation: 404
Make a .bat file (to run with cmd, or you can make a .ps1 to run with PowerShell if your container has it). In command or entrypoint, use myFile.bat (or myFile.ps1).
Below is my docker-compose.yml:
version: "3.4"
services:
myservicename:
image: mcr.microsoft.com/dotnet/sdk:6.0
container_name: mycontainername
environment:
- PORT=44390
command: buildAndRun.bat
[...]
My buildAndRun.bat:
dotnet --list-sdks
dotnet build
dotnet run
Upvotes: 1
Reputation: 928
That's my solution for this problem:
services:
  mongo1:
    container_name: mongo1
    image: mongo:4.4.4
    restart: always
    # OPTION 01:
    # command: >
    #   bash -c "chmod +x /scripts/rs-init.sh
    #   && sh /scripts/rs-init.sh"
    # OPTION 02:
    entrypoint: [ "bash", "-c", "chmod +x /scripts/rs-init.sh && sh /scripts/rs-init.sh" ]
    ports:
      - "9042:9042"
    networks:
      - mongo-cluster
    volumes:
      - ~/mongors/data1:/data/db
      - ./rs-init.sh:/scripts/rs-init.sh
      - api_vol:/data/db
    environment:
      *env-vars
    depends_on:
      - mongo2
      - mongo3
Upvotes: 14
Reputation: 578
In case anyone else is trying to figure out multiple commands with Windows-based containers, the following format works:
command: "cmd.exe /c call C:/Temp/script1.bat && dir && C:/Temp/script2.bat && ..."
Including the 'call' directive was what fixed it for me.
Alternatively if each command can execute without previous commands succeeding, just separate each with semicolons:
command: "cmd.exe /c call C:/Temp/script1.bat; dir; C:/Temp/script2.bat; ... "
Upvotes: 0
Reputation: 13929
Building on @Bjorn's answer, Docker has recently introduced special dependency conditions that allow you to wait until the "init container" has exited successfully, which gives:
db:
  image: postgres
web:
  image: app
  command: python manage.py runserver 0.0.0.0:8000
  depends_on:
    db:
      condition: service_started
    migration:
      condition: service_completed_successfully
migration:
  build: .
  image: app
  command: python manage.py migrate
  depends_on:
    - db
I'm not sure if you still need buildkit or not, but on my side it works with
DOCKER_BUILDKIT=1 COMPOSE_DOCKER_CLI_BUILD=1 docker-compose up
Upvotes: 3
Reputation: 926
There are many great answers in this thread already; however, I found that a combination of a few of them seemed to work best, especially for Debian-based users.
services:
  db:
    . . .
  web:
    . . .
    depends_on:
      - "db"
    command: >
      bash -c "./wait-for-it.sh db:5432 -- python manage.py makemigrations
      && python manage.py migrate
      && python manage.py runserver 0.0.0.0:8000"
Prerequisites: add wait-for-it.sh to your project directory.
Warning from the docs: "(When using wait-for-it.sh) in production, your database could become unavailable or move hosts at any time ... (This solution is for people that) don’t need this level of resilience."
Edit:
This is a cool short-term fix, but for a long-term solution you should try using entrypoints in the Dockerfiles for each image.
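For instance, a minimal Dockerfile sketch of that longer-term approach, assuming a project with a requirements.txt and a docker-entrypoint.sh that runs migrations and then hands off via exec "$@" (file names are illustrative):
FROM python:3
WORKDIR /code
COPY . .
RUN pip install -r requirements.txt   # assumes a requirements.txt in the project
RUN chmod +x /code/docker-entrypoint.sh
# the entrypoint script does the setup (migrations), then `exec "$@"` runs CMD
ENTRYPOINT ["/code/docker-entrypoint.sh"]
CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]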
Upvotes: 26
Reputation: 9045
I was having the same problem: I wanted to run my React app on port 3000 and Storybook on port 6006, both in the same container.
I tried to start both as entrypoint commands from the Dockerfile as well as using the docker-compose command option.
After spending time on this, I decided to separate these services into separate containers, and it worked like a charm.
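For reference, a minimal sketch of that split, assuming a Node-based image and the usual npm scripts (service names, ports and scripts are illustrative):
version: "3"
services:
  app:
    build: .
    command: npm start            # React dev server on 3000
    ports:
      - "3000:3000"
  storybook:
    build: .                      # same image, separate container
    command: npm run storybook    # Storybook on 6006
    ports:
      - "6006:6006"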
Upvotes: 0
Reputation: 1857
Cleanest?
---
version: "2"
services:
  test:
    image: alpine
    entrypoint: ["/bin/sh", "-c"]
    command:
      - |
        echo a
        echo b
        echo c
Upvotes: 131
Reputation: 6061
Alpine-based images have no bash installed, but you can use sh or ash, which link to /bin/busybox.
Example docker-compose.yml:
version: "3"
services:
api:
restart: unless-stopped
command: ash -c "flask models init && flask run"
Upvotes: 6
Reputation: 9131
This works for me:
version: '3.1'
services:
  db:
    image: postgres
  web:
    build: .
    command:
      - /bin/bash
      - -c
      - |
        python manage.py migrate
        python manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code
    ports:
      - "8000:8000"
    links:
      - db
docker-compose interpolates variables before running the command, so if you want bash to handle variables you'll need to escape the dollar signs by doubling them...
command:
  - /bin/bash
  - -c
  - |
    var=$$(echo 'foo')
    echo $$var # prints foo
...otherwise you'll get an error:
Invalid interpolation format for "command" option in service "web":
Upvotes: 207
Reputation: 9633
I recommend using sh as opposed to bash because it is more readily available on most Unix-based images (alpine, etc).
Here is an example docker-compose.yml:
version: '3'
services:
  app:
    build:
      context: .
    command: >
      sh -c "python manage.py wait_for_db &&
             python manage.py migrate &&
             python manage.py runserver 0.0.0.0:8000"
This will call the following commands in order:
python manage.py wait_for_db - wait for the DB to be ready
python manage.py migrate - run any migrations
python manage.py runserver 0.0.0.0:8000 - start my development server
Upvotes: 302
Reputation: 4161
I run pre-startup stuff like migrations in a separate ephemeral container, like so (note, compose file has to be of version '2' type):
db:
  image: postgres
web:
  image: app
  command: python manage.py runserver 0.0.0.0:8000
  volumes:
    - .:/code
  ports:
    - "8000:8000"
  links:
    - db
  depends_on:
    - migration
migration:
  build: .
  image: app
  command: python manage.py migrate
  volumes:
    - .:/code
  links:
    - db
  depends_on:
    - db
This helps keep things clean and separate. Two things to consider:
You have to ensure the correct startup sequence (using depends_on).
You want to avoid multiple builds, which is achieved by tagging the image the first time round using build and image together; you can then refer to that image in the other services.
Upvotes: 244
Reputation: 5496
To run multiple commands in the docker-compose file, use bash -c:
command: >
  bash -c "python manage.py makemigrations
  && python manage.py migrate
  && python manage.py runserver 0.0.0.0:8000"
Upvotes: 13
Reputation: 36260
* UPDATE *
I figured the best way to run some commands is to write a custom Dockerfile that does everything I want before the official CMD is run from the image.
docker-compose.yaml:
version: '3'
# Can be used as an alternative to VBox/Vagrant
services:
  mongo:
    container_name: mongo
    image: mongo
    build:
      context: .
      dockerfile: deploy/local/Dockerfile.mongo
    ports:
      - "27017:27017"
    volumes:
      - ../.data/mongodb:/data/db
Dockerfile.mongo:
FROM mongo:3.2.12
RUN mkdir -p /fixtures
COPY ./fixtures /fixtures
RUN (mongod --fork --syslog && \
mongoimport --db wcm-local --collection clients --file /fixtures/clients.json && \
mongoimport --db wcm-local --collection configs --file /fixtures/configs.json && \
mongoimport --db wcm-local --collection content --file /fixtures/content.json && \
mongoimport --db wcm-local --collection licenses --file /fixtures/licenses.json && \
mongoimport --db wcm-local --collection lists --file /fixtures/lists.json && \
mongoimport --db wcm-local --collection properties --file /fixtures/properties.json && \
mongoimport --db wcm-local --collection videos --file /fixtures/videos.json)
This is probably the cleanest way to do it.
* OLD WAY *
I created a shell script with my commands. In this case I wanted to start mongod and run mongoimport, but calling mongod blocks you from running the rest.
docker-compose.yaml:
version: '3'
services:
  mongo:
    container_name: mongo
    image: mongo:3.2.12
    ports:
      - "27017:27017"
    volumes:
      - ./fixtures:/fixtures
      - ./deploy:/deploy
      - ../.data/mongodb:/data/db
    command: sh /deploy/local/start_mongod.sh
start_mongod.sh:
mongod --fork --syslog && \
mongoimport --db wcm-local --collection clients --file /fixtures/clients.json && \
mongoimport --db wcm-local --collection configs --file /fixtures/configs.json && \
mongoimport --db wcm-local --collection content --file /fixtures/content.json && \
mongoimport --db wcm-local --collection licenses --file /fixtures/licenses.json && \
mongoimport --db wcm-local --collection lists --file /fixtures/lists.json && \
mongoimport --db wcm-local --collection properties --file /fixtures/properties.json && \
mongoimport --db wcm-local --collection videos --file /fixtures/videos.json && \
pkill -f mongod && \
sleep 2 && \
mongod
So this forks mongo, does the mongoimport, then kills the forked mongo (which is detached) and starts it up again without detaching. Not sure if there is a way to attach to a forked process, but this does work.
NOTE: If you strictly want to load some initial DB data, this is the way to do it:
mongo_import.sh:
#!/bin/bash
# Import from fixtures
# Used in build and docker-compose mongo (different dirs)
DIRECTORY=../deploy/local/mongo_fixtures
if [[ -d "/fixtures" ]]; then
DIRECTORY=/fixtures
fi
echo ${DIRECTORY}
mongoimport --db wcm-local --collection clients --file ${DIRECTORY}/clients.json && \
mongoimport --db wcm-local --collection configs --file ${DIRECTORY}/configs.json && \
mongoimport --db wcm-local --collection content --file ${DIRECTORY}/content.json && \
mongoimport --db wcm-local --collection licenses --file ${DIRECTORY}/licenses.json && \
mongoimport --db wcm-local --collection lists --file ${DIRECTORY}/lists.json && \
mongoimport --db wcm-local --collection properties --file ${DIRECTORY}/properties.json && \
mongoimport --db wcm-local --collection videos --file ${DIRECTORY}/videos.json
The mongo_fixtures/*.json files were created via the mongoexport command.
docker-compose.yaml:
version: '3'
services:
  mongo:
    container_name: mongo
    image: mongo:3.2.12
    ports:
      - "27017:27017"
    volumes:
      - mongo-data:/data/db:cached
      - ./deploy/local/mongo_fixtures:/fixtures
      - ./deploy/local/mongo_import.sh:/docker-entrypoint-initdb.d/mongo_import.sh
volumes:
  mongo-data:
    driver: local
Upvotes: 15
Reputation: 27278
Figured it out: use bash -c.
Example:
command: bash -c "python manage.py migrate && python manage.py runserver 0.0.0.0:8000"
The same example in multiple lines:
command: >
  bash -c "python manage.py migrate
  && python manage.py runserver 0.0.0.0:8000"
Or:
command: bash -c "
python manage.py migrate
&& python manage.py runserver 0.0.0.0:8000
"
Upvotes: 1528
Reputation: 3
I ran into this while trying to get my Jenkins container set up to build Docker containers as the jenkins user.
I needed to touch the docker.sock file in the Dockerfile, as I bind-mount it later on in the docker-compose file. Unless I touched it first, it didn't yet exist. This worked for me.
Dockerfile:
USER root
RUN apt-get update && \
apt-get -y install apt-transport-https \
ca-certificates \
curl \
software-properties-common && \
curl -fsSL https://download.docker.com/linux/$(. /etc/os-release; echo "$ID")/gpg > /tmp/dkey; apt-key add /tmp/dkey && \
add-apt-repository \
"deb [arch=amd64] https://download.docker.com/linux/$(. /etc/os-release; echo "$ID") \
$(lsb_release -cs) \
stable" && \
apt-get update && \
apt-get -y install docker-ce
RUN groupmod -g 492 docker && \
usermod -aG docker jenkins && \
touch /var/run/docker.sock && \
chmod 777 /var/run/docker.sock
USER jenkins
docker-compose.yml:
version: '3.3'
services:
  jenkins_pipeline:
    build: .
    ports:
      - "8083:8083"
      - "50083:50080"
    volumes:
      - /root/pipeline/jenkins/mount_point_home:/var/jenkins_home
      - /var/run/docker.sock:/var/run/docker.sock
Upvotes: 0
Reputation: 336
Try using ";" to separate the commands if you are on version two, e.g.:
command: "sleep 20; echo 'a'"
Upvotes: -8
Reputation: 1190
You can use an entrypoint here. In Docker, the entrypoint is executed before the command, while the command is the default that runs when the container starts. So most applications carry out their setup procedure in the entrypoint file and at the end let the command run.
Make a shell script, e.g. docker-entrypoint.sh (the name does not matter), with the following contents:
#!/bin/bash
python manage.py migrate
exec "$@"
In the docker-compose.yml file, use it with entrypoint: /docker-entrypoint.sh and register the command as command: python manage.py runserver 0.0.0.0:8000.
P.S.: do not forget to copy docker-entrypoint.sh along with your code.
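For example, a minimal sketch of that wiring in docker-compose.yml, assuming the script has been copied into the image and made executable by your Dockerfile:
services:
  web:
    build: .
    # the entrypoint runs the migrations, then `exec "$@"` hands off to the command
    entrypoint: /docker-entrypoint.sh
    command: python manage.py runserver 0.0.0.0:8000
    ports:
      - "8000:8000"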
Upvotes: 33
Reputation: 31
Use a tool such as wait-for-it or dockerize. These are small wrapper scripts which you can include in your application’s image, or write your own wrapper script to perform more application-specific commands, according to https://docs.docker.com/compose/startup-order/.
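A minimal sketch of that wrapper pattern, assuming wait-for-it.sh has been added to the image next to the app code (service names and ports are illustrative):
web:
  build: .
  # wait until the db port accepts connections, then start the app
  command: ./wait-for-it.sh db:5432 -- python manage.py runserver 0.0.0.0:8000
  depends_on:
    - db
db:
  image: postgres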
Upvotes: 2
Reputation: 10382
If you need to run more than one daemon process, there's a suggestion in the Docker documentation to use Supervisord in un-detached mode so all the sub-daemons will output to stdout.
From another SO question, I discovered you can redirect the child processes' output to stdout. That way you can see all the output!
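A minimal supervisord.conf sketch of that setup (program names and commands are illustrative):
[supervisord]
nodaemon=true                     ; keep supervisord in the foreground (un-detached)

[program:worker]
command=python worker.py
stdout_logfile=/dev/fd/1          ; redirect the child's output to the container's stdout
stdout_logfile_maxbytes=0
redirect_stderr=true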
Upvotes: 5
Reputation: 7230
Another idea:
If, as in this case, you build the container, just place a startup script in it and run it with command. Or mount the startup script as a volume.
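A minimal sketch of the volume-mount variant (the script name and contents are illustrative):
web:
  build: .
  volumes:
    - ./start.sh:/start.sh        # mount the startup script into the container
  command: sh /start.sh           # run it as the container's command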
Upvotes: 20