Reputation: 5471
I use Docker for development and in production for a Laravel project, with slightly different Dockerfiles for each. For example, in development I mount the local directory into the container so that I don't need to run docker build for every code change.
Since the mounted directory is only available while the container is running, I can't put commands like "composer install" or "npm install" in the development Dockerfile.
Currently I am maintaining two Dockerfiles. Is there a way to do this with a single Dockerfile and decide which commands to run at docker build time by passing parameters?
What I am trying to achieve is:
In the Dockerfile:
...
IF PROD THEN RUN composer install
...
During docker build
docker build [PROD] -t mytag .
Upvotes: 106
Views: 99283
Reputation: 2553
What works for me is combining .env, docker-compose.yml, and an if in the Dockerfile:
.env
...
BUILD=dev
...
Dockerfile
ARG BUILD=""
...
RUN if [ "${BUILD}" = "dev" ]; then \
touch dev.php; \
else \
touch other.php; \
fi
# if copying files which are different per env
COPY ./config.${BUILD}.php config.php
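Since Docker executes RUN instructions with /bin/sh, the conditional above can be sanity-checked in a plain shell before building (dev.php and other.php are just this answer's placeholder files):

```shell
# Same branch the RUN instruction executes: BUILD selects which file is created.
BUILD=dev
if [ "${BUILD}" = "dev" ]; then
    touch dev.php       # development-only file
else
    touch other.php     # file for every other environment
fi
```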
docker-compose.yml
services:
  laravel:
    env_file:
      - ./.env
    container_name: laravel
    build:
      context: .
      dockerfile: Dockerfile
      args:
        BUILD: ${BUILD}
...
docker compose up --force-recreate --build
To build locally without docker-compose.yml
docker build -f Dockerfile --build-arg BUILD=dev -t laravel .
docker run -it -d -v /mnt/data:/mnt/data -p 80:80 --env-file ./.env --net=host --name laravel laravel
Only .env differs between environments, so add .env to .gitignore and .dockerignore. It can be done differently too.
Upvotes: 0
Reputation: 51876
As a best practice, you should aim to use one Dockerfile to avoid unexpected differences between environments. However, you may have a use case where you cannot do that.
The Dockerfile syntax is not rich enough to support such a scenario directly, but you can use shell scripts to achieve it.
Create a shell script called install.sh
that does something like:
if [ "${ENV}" = "DEV" ]; then
    composer install
else
    npm install
fi
In your Dockerfile, declare the build arg, then add the script and execute it during the build:
...
ARG ENV
COPY install.sh install.sh
RUN chmod u+x install.sh && ./install.sh
...
When building pass a build arg to specify the environment, example:
docker build --build-arg "ENV=PROD" ...
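To sanity-check the branching before baking it into an image, the same logic can be run locally; the echo lines below are hypothetical stand-ins for the real composer install / npm install commands:

```shell
# Stand-in for install.sh: prints the command that would run for a given ENV.
run_install() {
    if [ "${ENV}" = "DEV" ]; then
        echo "composer install"
    else
        echo "npm install"
    fi
}

ENV=DEV
run_install    # prints: composer install
ENV=PROD
run_install    # prints: npm install
```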
Upvotes: 98
Reputation: 381
If you run your containers with docker-compose
, you can also use multistage dockerfiles. At first you declare common part that both environments use and alias it like 'base' or something else. Then in the same dockerfile you declare new FROM
instruction that inherits base
and runs commands for the environment.
For example, I have a Ruby image and I want to run
bundle install
in development and bundle install --without development test
in production. So my Dockerfile would be something like this:
FROM ruby:2.7.1-slim as base
CMD ["/bin/bash"]
COPY Gemfile .
COPY Gemfile.lock .
FROM base as dev
RUN bundle install
FROM base as prod
RUN bundle install --without development test
In my dev docker-compose.yml
file I use this:
services:
  web:
    build:
      target: dev
In docker-compose.prod.yml
this:
services:
  web:
    build:
      target: prod
You can read more about multi-stage builds in the Docker documentation.
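The same stage names also work without compose: plain docker build accepts --target to stop at a named stage (the myapp tags here are placeholders):

```
docker build --target dev -t myapp:dev .
docker build --target prod -t myapp:prod .
```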
Upvotes: 6
Reputation: 50
The best way to do it is with a .env file in your project. Define two variables, CONTEXTDIRECTORY and DOCKERFILENAME, and create Dockerfile-dev and Dockerfile-prod.
This is example of using it:
docker compose file:
services:
  serviceA:
    build:
      context: ${CONTEXTDIRECTORY:-./prod_context}
      dockerfile: ${DOCKERFILENAME:-./nginx/Dockerfile-prod}
.env file in the root of project:
CONTEXTDIRECTORY=./
DOCKERFILENAME=Dockerfile-dev
Be careful with the dockerfile path: it is resolved relative to the build context you specified, not relative to the docker-compose directory.
The default values use prod because, if you forget to specify the env variables, you won't be able to accidentally build a dev version in production.
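Compose's ${VAR:-default} interpolation follows the shell rule: the fallback after :- is used when the variable is unset or empty. A quick shell illustration:

```shell
# Variable unset: the fallback after :- is substituted.
unset CONTEXTDIRECTORY
echo "${CONTEXTDIRECTORY:-./prod_context}"   # prints: ./prod_context

# Variable set (e.g. loaded from .env): its own value wins.
CONTEXTDIRECTORY=./
echo "${CONTEXTDIRECTORY:-./prod_context}"   # prints: ./
```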
The solution with different Dockerfiles is more convenient than scripts; it's easier to change and maintain.
Upvotes: 4
Reputation: 2518
UPDATE (2020): Since this was written 3 years ago, many things have changed (including my opinion about this topic). My suggested way of doing this, is using one dockerfile and using scripts. Please see @yamenk's answer.
ORIGINAL:
You can use two different Dockerfiles.
# ./Dockerfile (non production)
FROM foo/bar
MAINTAINER ...
# ....
And a second one:
# ./Dockerfile.production
FROM foo/bar
MAINTAINER ...
RUN composer install
While calling the build command, you can tell which file it should use:
$> docker build -t mytag .
$> docker build -t mytag-production -f Dockerfile.production .
Upvotes: 76
Reputation: 211
I have tried several approaches to this, including docker-compose, a multi-stage build, passing an argument through a file, and the approaches used in other answers. My company needed a good way to do this, and after trying them all, here is my opinion.
The best method is to pass the arg on the command line (in VS Code, you can also right-click the Dockerfile and choose Build Image). Use this code:
ARG BuildMode
RUN echo $BuildMode
RUN if [ "$BuildMode" = "debug" ] ; then apt-get update \
&& apt-get install -y --no-install-recommends \
unzip \
&& rm -rf /var/lib/apt/lists/* \
&& curl -sSL https://aka.ms/getvsdbgsh | bash /dev/stdin -v latest -l /vsdbg ; fi
and in the build stage of the Dockerfile:
ARG BuildMode
ENV Environment=${BuildMode:-debug}
RUN dotnet build "debugging.csproj" -c $Environment -o /app
FROM build AS publish
RUN dotnet publish "debugging.csproj" -c $Environment -o /app
Upvotes: 8
Reputation: 4250
You can use build args directly without providing an additional sh script. It might look a little messy, but it works.
The Dockerfile must be like this:
FROM alpine
ARG mode
RUN if [ "x$mode" = "xdev" ] ; then echo "Development" ; else echo "Production" ; fi
And commands to check are:
docker build -t app --build-arg mode=dev .
docker build -t app --build-arg mode=prod .
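The x prefix in the comparison is an old portability idiom: it keeps the test well-formed even when $mode is empty or starts with a dash on ancient test implementations. The branch can be exercised locally:

```shell
# Same test the RUN instruction performs, wrapped for each build-arg value.
check_mode() {
    if [ "x$1" = "xdev" ]; then echo "Development"; else echo "Production"; fi
}

check_mode dev    # prints: Development
check_mode prod   # prints: Production
```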
Upvotes: 12