Reputation: 34236
How can one access an external database from a container? Is the best way to hard-code the connection string?
# Dockerfile
ENV DATABASE_URL amazon:rds/connection?string
Upvotes: 1499
Views: 2227695
Reputation: 5627
Using docker-compose, you can inherit environment variables in docker-compose.yml and subsequently in any Dockerfile(s) called by docker-compose to build images. This is useful when the Dockerfile RUN command should execute commands specific to the environment.
(Your shell already has RAILS_ENV=development in its environment.)
docker-compose.yml:
docker-compose.yml:
version: '3.1'
services:
  my-service:
    build:
      #$RAILS_ENV is referencing the shell environment RAILS_ENV variable
      #and passing it to the Dockerfile ARG RAILS_ENV.
      #The syntax below ensures that the RAILS_ENV arg will default to
      #production if empty.
      #Note that if dockerfile: is not specified, it assumes the file name: Dockerfile
      context: .
      args:
        - RAILS_ENV=${RAILS_ENV:-production}
    environment:
      - RAILS_ENV=${RAILS_ENV:-production}
Dockerfile:
FROM ruby:2.3.4
#give ARG RAILS_ENV a default value = production
ARG RAILS_ENV=production
#assign the $RAILS_ENV arg to the RAILS_ENV ENV so that it can be accessed
#by the subsequent RUN call within the container
ENV RAILS_ENV $RAILS_ENV
#the subsequent RUN call accesses the RAILS_ENV ENV variable within the container
RUN if [ "$RAILS_ENV" = "production" ] ; then echo "production env"; else echo "non-production env: $RAILS_ENV"; fi
This way, I don't need to specify environment variables in files or in the docker-compose build/up commands:
docker-compose build
docker-compose up
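The ${RAILS_ENV:-production} default used above is plain shell parameter expansion, which can be checked without Docker (a minimal sketch):

```shell
# ${RAILS_ENV:-production} falls back to production when RAILS_ENV is unset or empty
unset RAILS_ENV
echo "${RAILS_ENV:-production}"    # prints: production

export RAILS_ENV=development
echo "${RAILS_ENV:-production}"    # prints: development
```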
Upvotes: 57
Reputation: 21
Inside your Dockerfile, add:
ENV variable_name=variable_value
Upvotes: 1
Reputation: 26972
You can pass environment variables to your containers with the -e (alias --env) flag.
docker run -e xx=yy
An example from a startup script:
sudo docker run -d -t -i -e REDIS_NAMESPACE='staging' \
-e POSTGRES_ENV_POSTGRES_PASSWORD='foo' \
-e POSTGRES_ENV_POSTGRES_USER='bar' \
-e POSTGRES_ENV_DB_NAME='mysite_staging' \
-e POSTGRES_PORT_5432_TCP_ADDR='docker-db-1.hidden.us-east-1.rds.amazonaws.com' \
-e SITE_URL='staging.mysite.com' \
-p 80:80 \
--link redis:redis \
--name container_name dockerhub_id/image_name
Or, if you don't want to have the value on the command line where it will be displayed by ps, etc., -e can pull in the value from the current environment if you just give it without the =:
sudo PASSWORD='foo' docker run [...] -e PASSWORD [...]
If you have many environment variables and especially if they're meant to be secret, you can use an env-file:
$ docker run --env-file ./env.list ubuntu bash
The --env-file flag takes a filename as an argument and expects each line to be in the VAR=VAL format, mimicking the argument passed to --env. Comment lines need only be prefixed with #.
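For illustration, a hypothetical env.list could look like this (names and values are made up):

```
# comment lines start with #
DB_HOST=db.example.com
DB_USER=admin
DB_PASS=s3cret
```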
Upvotes: 2316
Reputation: 2487
There are several ways to pass environment variables to the container including using docker-compose (best choice if possible).
I recommend using an env file for easier organization and maintenance.
EXAMPLE (docker-compose CLI)
docker-compose -f docker-compose.yml --env-file ./.env up
EXAMPLE (docker CLI)
docker run -it --name "some-ctn-name" --env-file ./.env "some-img-name:Dockerfile"
IMPORTANT: The docker CLI has some limitations regarding environment variables (see below).
The docker run subcommand, strangely, does not accept env files formatted as valid Bash ("shell") scripts, so it considers surrounding quotes and double quotes as part of the value of environment variables. The container will therefore get the value of (in an env file, for example)...
SOME_ENV_VAR_A="some value a"
... as "some value a" and not as some value a. Besides that, we'll have problems using the same env file in other contexts (including Bash itself). 🙄
This is quite strange behavior, since .env files are regular Bash ("shell") scripts.
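The quoting difference is easy to reproduce without Docker (a minimal sketch; the file path is hypothetical):

```shell
# Write an env file with a double-quoted value
printf 'SOME_ENV_VAR_A="some value a"\n' > /tmp/demo.env

# docker run --env-file reads each line literally, so the quotes stay in the value:
cut -d= -f2- /tmp/demo.env     # prints: "some value a"

# sourcing the same file in a shell strips the quotes:
. /tmp/demo.env
echo "$SOME_ENV_VAR_A"         # prints: some value a
```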
However, BASH ("Shell") offers us powerful features, so let's use it to our advantage in a workaround solution.
My solution involves a Dockerfile, an env file, a Bash script file and the run subcommand (docker run) used in a special way.
The strategy consists of injecting your environment variables through another environment variable set in the run subcommand, and using the container itself to set these variables.
EXAMPLE (Dockerfile)
FROM python:3.10-slim-buster
WORKDIR /some-name
COPY . /some-name/
RUN apt-get -y update \
&& apt-get -y upgrade \
[...]
ENTRYPOINT bash entrypoint.bash
EXAMPLE (.env)
#!/bin/bash
# Some description a
SOME_ENV_VAR_A="some value a"
# Some description b
SOME_ENV_VAR_B="some value b"
# Some description c
SOME_ENV_VAR_C="some value c"
[...]
EXAMPLE (entrypoint.bash)
#!/bin/bash
set -a;source <(echo -n "$ENV_VARS");set +a
python main.py
EXAMPLE (docker run)
docker run -it --name "some-ctn-name" --env ENV_VARS="$(cat ./.env)" "some-img-name:Dockerfile"
docker-compose does not have this problem, as it uses YAML: YAML does not consider surrounding quotes and double quotes part of the value of environment variables, unlike the docker run subcommand.
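The core of the workaround can be reproduced outside Docker (a bash sketch using the variable names from the example above):

```shell
#!/bin/bash
# Simulate what docker run --env ENV_VARS="$(cat ./.env)" hands to the container
ENV_VARS='SOME_ENV_VAR_A="some value a"
SOME_ENV_VAR_B="some value b"'

# set -a auto-exports every variable defined by the sourced stream
set -a; source <(echo -n "$ENV_VARS"); set +a

echo "$SOME_ENV_VAR_A"    # prints: some value a
```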
Tks! 😎
Upvotes: 10
Reputation: 4039
Use -e or --env value to set environment variables (default []).
An example from a startup script:
docker run -e myhost='localhost' -it busybox sh
If you want to use multiple environment variables from the command line, put the -e flag before every environment variable.
Example:
sudo docker run -d -t -i -e NAMESPACE='staging' -e PASSWORD='foo' busybox sh
Note: Make sure to put the image name after the environment variables, not before them.
If you need to set up many variables, use the --env-file flag.
For example,
$ docker run --env-file ./my_env ubuntu bash
For any other help, look into the Docker help:
$ docker run --help
Official documentation: https://docs.docker.com/compose/environment-variables/
Upvotes: 113
Reputation: 1715
Easiest solution: Just run these commands
sudo docker container run -p 3306:3306 -e MYSQL_RANDOM_ROOT_PASSWORD=yes --name mysql -d mysql
sudo docker container logs mysql
What is happening there?
Explicit solution: Here we can not only pass our own password and database name, but also create a specific network through which any application can interact with this database. Moreover, we can access the Docker database and see its contents. Please see below:
docker network create todo-app
docker run -d \
--network todo-app --network-alias mysql \
-v todo-mysql-data:/var/lib/mysql \
-e MYSQL_ROOT_PASSWORD=secret \
-e MYSQL_DATABASE=todos \
mysql:8.0
docker exec -it <mysql-container-id> mysql -u root -p
SHOW DATABASES;
Upvotes: 1
Reputation: 11
Example: Suppose you have a use case to start a MySQL database container, for which you need to pass the following variables:
docker run -dit --name db1 -e MYSQL_ROOT_PASSWORD=root -e MYSQL_DATABASE=mydb -e MYSQL_USER=jack -e MYSQL_PASSWORD=redhat mysql:5.7
Upvotes: 0
Reputation: 1543
The problem I had was that I was putting --env-file at the end of the command:
docker run -it --rm -p 8080:80 imagename --env-file ./env.list
Fix
docker run --env-file ./env.list -it --rm -p 8080:80 imagename
The reason is that the docker run command has the signature below: the options come before the image name. The image name feels like an option, but it is a positional parameter of the run command.
docker run [OPTIONS] IMAGE [COMMAND] [ARG...]
Upvotes: 20
Reputation: 100426
Using jq to convert the environment to JSON:
env_as_json=`jq -c -n env`
docker run -e HOST_ENV="$env_as_json" <image>
This requires jq version 1.6 or newer.
This passes the host environment into the container as a single JSON string, essentially like so in a Dockerfile:
ENV HOST_ENV (all environment from the host as JSON)
Upvotes: 2
Reputation: 4669
There is a nice hack for piping host machine environment variables into a Docker container:
env > env_file && docker run --env-file env_file image_name
Use this technique very carefully, because
env > env_file
will dump ALL host machine environment variables to env_file
and make them accessible in the running container.
Upvotes: 28
Reputation: 12100
You can pass them using the -e parameter with the docker run ... command, as mentioned here and by errata.
However, the possible downside of this approach is that your credentials will be displayed in the process listing where you run it.
To make it more secure, you may write your credentials in a configuration file and do docker run with --env-file, as mentioned here. Then you can control access to that configuration file so that others having access to that machine won't see your credentials.
Upvotes: 149
Reputation: 319
You can use -e or --env as an argument, followed by a key=value pair.
For example:
docker run -e MYSQL_ROOT_PASSWORD=root mysql
Upvotes: 3
Reputation: 57
Here is how I was able to solve it:
docker run --rm -ti -e AWS_ACCESS_KEY_ID -e AWS_SECRET_ACCESS_KEY -e AWS_SESSION_TOKEN -e AWS_SECURITY_TOKEN amazon/aws-cli s3 ls
One more example:
export VAR1=value1
export VAR2=value2
docker run --env VAR1 --env VAR2 ubuntu env | grep VAR
Output:
VAR1=value1
VAR2=value2
Upvotes: 2
Reputation: 1054
We can also pass host machine environment variables using the -e flag and $:
Before running the following command, we need to export (i.e., set) the local environment variables.
docker run -it -e MG_HOST=$MG_HOST \
-e MG_USER=$MG_USER \
-e MG_PASS=$MG_PASS \
-e MG_AUTH=$MG_AUTH \
-e MG_DB=$MG_DB \
-t image_tag_name_and_version
By using this method, you can set the environment variables automatically with your given name. In my case, MG_HOST and MG_USER.
If you are using Python, you can access these environment variables inside Docker by:
import os
host = os.environ.get('MG_HOST')
username = os.environ.get('MG_USER')
password = os.environ.get('MG_PASS')
auth = os.environ.get('MG_AUTH')
database = os.environ.get('MG_DB')
Upvotes: 42
Reputation: 54989
There are some documentation inconsistencies for setting environment variables with docker run.
The online reference says one thing:
--env , -e Set environment variables
The manpage is a little different:
-e, --env=[] Set environment variables
docker run --help gives something else again:
-e, --env list Set environment variables
Something that isn't necessarily clear in any of the available documentation:
A trailing space after -e or --env can be replaced by =, or, in the case of -e, can be elided altogether:
$ docker run -it -ekey=value:1234 ubuntu env
key=value:1234
A trick that I found by trial and error (and clues in the above)...
If you get the error:
unknown flag: --env
then you may find it helpful to use an equals sign with --env, for example:
--env=key=value:1234
Different methods of launching a container may have different parsing scenarios.
These tricks may be helpful when using Docker in various composing configurations, such as Visual Studio Code's devcontainer.json, where spaces are not allowed in the runArgs array.
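For instance, a hypothetical devcontainer.json fragment (the key and value are made up for illustration):

```json
{
  "runArgs": ["--env=key=value:1234"]
}
```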
Upvotes: 1
Reputation: 1250
For passing multiple environment variables via docker-compose, an environment file can be used in the docker-compose file as well:
web:
  env_file:
    - web-variables.env
https://docs.docker.com/compose/environment-variables/#the-env_file-configuration-option
Upvotes: 7
Reputation: 103
docker run --rm -it --env-file <(bash -c 'env | grep <your env data>')
is a way to grep the data stored within a .env file and pass it to Docker, without anything being stored insecurely (so you can't just look at docker history and grab keys).
Say you have a load of AWS stuff in your .env like so:
AWS_ACCESS_KEY: xxxxxxx
AWS_SECRET: xxxxxx
AWS_REGION: xxxxxx
Running Docker with docker run --rm -it --env-file <(bash -c 'env | grep AWS_') will grab it all and pass it securely, to be accessible from within the container.
Upvotes: 8
Reputation: 674
If you have the environment variables in an env.sh locally and want to set them up when the container starts, you could try:
COPY env.sh /env.sh
COPY <filename>.jar /<filename>.jar
ENTRYPOINT ["/bin/bash" , "-c", "source /env.sh && printenv && java -jar /<filename>.jar"]
This command starts the container with a bash shell (I want a bash shell since source is a bash command), sources the env.sh file (which sets the environment variables) and executes the jar file.
The env.sh looks like this:
#!/bin/bash
export FOO="BAR"
export DB_NAME="DATABASE_NAME"
I added the printenv command only to test that the source command actually works. You should remove it once you confirm the source command works fine, or the environment variables will appear in your Docker logs.
Upvotes: 6
Reputation: 1961
If you are using 'docker-compose' as the method to spin up your container(s), there is actually a useful way to pass an environment variable defined on your server to the Docker container.
In your docker-compose.yml file, let's say you are spinning up a basic hapi-js container and the code looks like:
hapi_server:
  container_name: hapi_server
  image: node_image
  expose:
    - "3000"
Let's say that the local server that your Docker project is on has an environment variable named 'NODE_DB_CONNECT' that you want to pass to your hapi-js container, and you want its new name to be 'HAPI_DB_CONNECT'. Then in the docker-compose.yml file, you would pass the local environment variable to the container and rename it like so:
hapi_server:
  container_name: hapi_server
  image: node_image
  environment:
    - HAPI_DB_CONNECT=${NODE_DB_CONNECT}
  expose:
    - "3000"
I hope this helps you to avoid hard-coding a database connect string in any file in your container!
Upvotes: 86
Reputation: 30941
Another way is to use the powers of /usr/bin/env:
docker run ubuntu env DEBUG=1 path/to/script.sh
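What env does here can be seen without Docker (a minimal sketch):

```shell
# env sets DEBUG only for the command it launches; the calling shell is untouched
env DEBUG=1 sh -c 'echo "$DEBUG"'    # prints: 1
```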
Upvotes: 7
Reputation: 882
For Amazon AWS ECS/ECR, you should manage your environment variables (especially secrets) via a private S3 bucket. See blog post How to Manage Secrets for Amazon EC2 Container Service–Based Applications by Using Amazon S3 and Docker.
Upvotes: 6