user10664542

Reputation: 1316

How can I have one Dockerfile and deploy to many environments, where each container needs a large number of runtime variables set uniquely for each env?

For my project, I have a Dockerfile to build a docker image that needs to be deployed to four environments.

dev, test, staging, prod (or even more: dev-poc, ...)


Deployed in each environment, the running container needs to connect to a database within that environment (each env has a different database URL, login, and password), and also make API calls to systems that are associated 1:1 with that env (dev, test, staging, prod), so each env has different API URLs it needs to be aware of.

These are being set as ENV variables when the container runs, and then my code reads the ENV vars and sets them as variables in the executing code to be used.
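To illustrate, the runtime lookup looks roughly like this in Python (the variable names below are made up for the example; my real keys number about 100):

```python
import os

# Set on the container by the deployment, e.g. DB_URL=..., DB_USER=...
# The names and defaults here are illustrative only.
db_url = os.environ.get("DB_URL", "postgresql://dev-db:5432/app")
db_user = os.environ.get("DB_USER", "app_dev")
api_base = os.environ.get("API_BASE", "https://api.dev.example.com")
```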


As things stand, each built image is unique to one environment, and that is the problem: I only want to build one image for all environments.

The current way I handle this is to copy a file to the image in the Dockerfile

COPY config/dev.ini config/dev.ini

to build the image for the dev environment

and

COPY config/test.ini config/test.ini

to build the image for the test environment

Each .ini file has a [SECTION] and under there key=value pairs.

My code will then run reading each SECTION and key/value pairs as needed.
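That read can be sketched with Python's configparser (the section and key names here are stand-ins, and the file contents are inlined for the example; in the container the code would call config.read("config/dev.ini")):

```python
import configparser

# Stand-in for config/dev.ini; the real file has about 100 key=value pairs.
sample_ini = """
[DATABASE]
url = postgresql://dev-db:5432/app
user = app_dev

[API]
orders_url = https://api.dev.example.com/orders
"""

config = configparser.ConfigParser()
config.read_string(sample_ini)  # in the container: config.read("config/dev.ini")

# Code then pulls each SECTION and key as needed.
db_url = config["DATABASE"]["url"]
orders_url = config["API"]["orders_url"]
```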

The problem is that I have to manually change which <env>.ini is copied to the image in my Dockerfile, in my local IDE, before I do a 'docker build'.


This would not be a problem if I could put conditional logic in the Dockerfile and build based on a single environment variable.
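To illustrate what I mean by driving the build from a single variable, something like this hypothetical Dockerfile fragment, where one build argument picks the file (I believe ARG values declared after FROM can be substituted into a COPY path, but I have not verified this for my setup):

```dockerfile
# Hypothetical: one build-arg selects which .ini lands in the image,
# e.g.  docker build --build-arg ENV=test .
ARG ENV=dev
COPY config/${ENV}.ini config/${ENV}.ini
```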


Each .ini file has about 100 params that each running container needs. I really only want one build script, and so using:

--build-arg KEY=value

There would be 100 params (--build-args) per environment, times four environments, so that is not an option.


At runtime, when the image is instantiated, using --env-file <path-to-env-file> is not an option because I do not execute the run command; it is deployed to a cloud service.


Another thing I manually change in my project is:

$project_root/.env

The .env file has:

ENV=<environment>

like

ENV=test

and so my code loads this .env file to set the environment variable based on what is set there

if ENV=test, the code will look for test.ini (and as you can see above, I copied config/test.ini from my local file system to config/test.ini in the image).

If the file copied from my local file system to the image does not match what is in .env, then the container will fail executing methods because the config file will not be found.
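A sketch of that selection logic and its failure mode (the .env parsing here stands in for a dotenv library; the demo setup lines fake the project layout so the example is self-contained):

```python
import os
from pathlib import Path

# Demo setup only: in the real project these files already exist.
Path("config").mkdir(exist_ok=True)
Path("config/test.ini").write_text("[SECTION]\nkey = value\n")
Path(".env").write_text("ENV=test\n")

# Load KEY=value pairs from .env (a dotenv library does the same thing).
for line in Path(".env").read_text().splitlines():
    key, sep, value = line.partition("=")
    if sep and not line.startswith("#"):
        os.environ[key.strip()] = value.strip()

env = os.environ["ENV"]
config_path = Path("config") / f"{env}.ini"

# The mismatch failure: if the Dockerfile had COPY'd dev.ini instead,
# this check fails and the container dies looking for config/test.ini.
if not config_path.exists():
    raise FileNotFoundError(f"{config_path} was not baked into the image")
```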


I am ideally looking for a solution from someone who has actually done this or does this daily: one Dockerfile that does not change between environments, but when the container is instantiated, it acts in a way unique to that environment because of the ENV variables that drive it.


It may be that this is simply not possible with a Dockerfile.

I am considering using podman/buildah so I can script the build and put conditional logic into building the container.

This would allow reading the .env file and, based on what is set in ENV=<env>, copying the right file into the image. The issue is that with a Dockerfile I cannot do things programmatically by reading a .env setting and getting the right file copied to the image; I have to manually change the Dockerfile each time I build for a specific environment.
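The wrapper script I have in mind would look roughly like this (hypothetical; the actual build command is left commented out, and the first few lines only fake the project layout so the sketch runs on its own):

```shell
#!/bin/sh
# Demo setup only: the real project already has these files.
mkdir -p config
echo "ENV=test" > .env
echo "[SECTION]" > config/test.ini

# Read ENV=<env> from the project .env and pick the matching .ini.
TARGET_ENV=$(grep '^ENV=' .env | cut -d= -f2)
CONFIG_FILE="config/${TARGET_ENV}.ini"

if [ ! -f "$CONFIG_FILE" ]; then
    echo "missing $CONFIG_FILE" >&2
    exit 1
fi

# Stage the one file the Dockerfile will COPY, then build.
cp "$CONFIG_FILE" config/runtime.ini
# docker build -t myapp:"$TARGET_ENV" .   # or: buildah bud -t myapp:"$TARGET_ENV" .
echo "would build myapp:$TARGET_ENV from $CONFIG_FILE"
```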

Upvotes: 1

Views: 234
