Jaaaaaaay

Reputation: 1975

Use docker run command to pass arguments to CMD in Dockerfile

I have a nodejs app that can take two parameters when it is started. For example, I can use

node server.js 0 dev

or

node server.js 1 prod

to switch between production mode and dev mode and to determine whether the cluster should be turned on. Now I want to create a docker image that takes arguments to do the same thing. The only thing I can do so far is adjust the Dockerfile to have the line

CMD [ "node", "server.js", "0", "dev"]

and

docker build -t me/app .
docker run -p 9000:9000 -d me/app

to build and run the docker image.

But if I want to switch to prod mode, I need to change the Dockerfile CMD to:

CMD [ "node", "server.js", "1", "prod"]

and then kill the old container listening on port 9000 and rebuild the image. I wish I could do something like

docker run -p 9000:9000 environment=dev cluster=0 -d me/app 

to run the node command with "environment" and "cluster" arguments, so I no longer need to change the Dockerfile and rebuild the image. How can I accomplish this?

Upvotes: 95

Views: 204543

Answers (8)

XDS

Reputation: 4188

Late joining the discussion. Here's a nifty trick you can use to set default command line parameters while also supporting overriding the default arguments with custom ones:

Step #1: In your Dockerfile, invoke your program like so:

ENV DEFAULT_ARGS "--some-default-flag=123 --foo --bar"
CMD ["/bin/bash", "-c", "./my-nifty-executable   ${ARGS:-${DEFAULT_ARGS}}"]

Step #2: You can now invoke the docker image like so:

# this will invoke it with DEFAULT_ARGS
docker run mydockerimage   

# but this will invoke the docker image with custom arguments
docker run   --env  ARGS="--alternative-args  --and-then-some=123"   mydockerimage 

You can also adapt this technique to do much more complex argument evaluation, however you see fit; bash supports many kinds of one-liner constructs to help you toward that goal.
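
For example, a variation (a sketch; EXTRA_ARGS is a name introduced here for illustration, not anything Docker defines) that appends caller-supplied flags to the defaults instead of replacing them:

ENV DEFAULT_ARGS "--some-default-flag=123 --foo --bar"
CMD ["/bin/bash", "-c", "./my-nifty-executable ${DEFAULT_ARGS} ${EXTRA_ARGS:-}"]

# defaults only
docker run mydockerimage

# defaults plus additional flags
docker run --env EXTRA_ARGS="--and-then-some=123" mydockerimage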

Hope this technique helps some folks out there save a few hours of head-scratching.

Upvotes: 7

VonC

Reputation: 1323045

Make sure your Dockerfile declares an environment variable with ENV:

ENV environment default_env_value
ENV cluster default_cluster_value

The ENV <key> <value> form can be replaced inline.

Then you can pass those environment variables with docker run. Note that each variable requires its own -e flag.

docker run -p 9000:9000 -e environment=dev -e cluster=0 -d me/app

Or you can set them through your compose file:

node:
  environment:
    - environment=dev
    - cluster=0

Your Dockerfile CMD can use that environment variable, but, as mentioned in issue 5509, you need to do so in a sh -c form:

CMD ["sh", "-c", "node server.js ${cluster} ${environment}"]

The explanation is that the shell is responsible for expanding environment variables, not Docker. When you use the JSON syntax, you're explicitly requesting that your command bypass the shell and be executed directly.

Same idea with Builder RUN (applies to CMD as well):

Unlike the shell form, the exec form does not invoke a command shell.
This means that normal shell processing does not happen.

For example, RUN [ "echo", "$HOME" ] will not do variable substitution on $HOME. If you want shell processing then either use the shell form or execute a shell directly, for example: RUN [ "sh", "-c", "echo $HOME" ].

When using the exec form and executing a shell directly, as in the case for the shell form, it is the shell that is doing the environment variable expansion, not docker.

As noted by Pwnstar in the comments:

When you wrap the command in a shell like this, your process will not be started as PID 1.
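
If that matters for signal handling, one workaround (a sketch, not part of the original answer) is to have the shell exec the node process, so node replaces the shell and becomes PID 1:

CMD ["sh", "-c", "exec node server.js ${cluster} ${environment}"]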

Upvotes: 143

Kevin O.

Reputation: 448

I found this approach in docker-compose not setting environment variables with flask:

docker-compose.yml

version: '2'

services:
  app:
      image: python:2.7
      environment:
        - BAR=FOO
      volumes:
        - ./app.py:/app.py
      command: python app.py

app.py

import os

print(os.environ["BAR"])
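
To see it in action (a sketch, assuming app.py sits next to docker-compose.yml):

# start the service; the container should print FOO, the value injected through environment:
docker-compose up app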

Upvotes: 2

Navap

Reputation: 1072

Not sure if this helps, but I have used it this way and it worked like a charm:

CMD ["node", "--inspect=0.0.0.0:9229", "--max-old-space-size=256", "/home/api/index.js"]

Upvotes: 3

Black

Reputation: 10397

Option 1) Use ENV variable

Dockerfile

# we need to specify default values
ENV ENVIRONMENT=production
ENV CLUSTER=1

# there is no need to use parameters array
CMD node server.js ${CLUSTER} ${ENVIRONMENT}

Docker run

$ docker run -d -p 9000:9000 -e ENVIRONMENT=dev -e CLUSTER=0 me/app

Option 2) Pass arguments

Dockerfile

# use ENTRYPOINT (in exec form) instead of CMD and do not specify any arguments;
# the exec form is needed so arguments passed to docker run are appended
ENTRYPOINT ["node", "server.js"]

Docker run

Pass arguments after docker image name

$ docker run -p 9000:9000 -d me/app 0 dev

Upvotes: 18

Ryan Augustine

Reputation: 1544

Going a bit off topic: build arguments let you pass in values at build time that manifest as environment variables for use in your docker image build process:

$ docker build --build-arg HTTP_PROXY=http://10.20.30.2:1234 .
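
HTTP_PROXY works without any declaration because it is one of Docker's predefined proxy build args; a custom build argument also needs an ARG line in the Dockerfile, and has to be copied into ENV if you want it at runtime. A sketch with a hypothetical APP_VERSION argument:

ARG APP_VERSION=latest
ENV APP_VERSION=${APP_VERSION}

$ docker build --build-arg APP_VERSION=1.2.3 -t me/app .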

Upvotes: 4

Roman

Reputation: 20246

Another option is to use ENTRYPOINT to specify that node is the executable to run and CMD to provide the arguments. The docs have an example in Exec form ENTRYPOINT example.

Using this approach, your Dockerfile will look something like

FROM ...

ENTRYPOINT [ "node",  "server.js" ]
CMD [ "0", "dev" ]

Running it in dev would use the same command

docker run -p 9000:9000 -d me/app

and to run it in prod you would pass the parameters to the run command:

docker run -p 9000:9000 -d me/app 1 prod

You may want to omit CMD entirely and always pass in 0 dev or 1 prod as arguments to the run command. That way you don't accidentally start a prod container in dev or a dev container in prod.
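
A sketch of that stricter variant:

FROM ...

# no CMD: the mode arguments must always be supplied on docker run
ENTRYPOINT [ "node", "server.js" ]

docker run -p 9000:9000 -d me/app 0 dev
docker run -p 9000:9000 -d me/app 1 prod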

Upvotes: 75

Paul

Reputation: 36319

The typical way to do this in Docker containers is to pass in environment variables:

docker run -p 9000:9000 -e NODE_ENV=dev -e CLUSTER=0 -d me/app
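
If server.js reads positional arguments rather than process.env, you still need to map those variables in the CMD, for example with the sh -c form from VonC's answer (a sketch):

CMD ["sh", "-c", "node server.js ${CLUSTER} ${NODE_ENV}"]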

Upvotes: 6
