Reputation: 20838
I have been reading up and learning about Docker, and am trying to correctly choose the Django setup to use. So far the choice seems to come down to either Docker Compose or a plain Dockerfile.
I understand that Dockerfiles are used in Docker Compose, but I am not sure if it is good practice to put everything in one large Dockerfile with multiple FROM commands for the different images.
I want to use several different images that include:
uwsgi
nginx
postgres
redis
rabbitmq
celery with cron
Please advise on best practices for setting up this type of environment using Docker.
If it helps, I am on a Mac, so using boot2docker.
Upvotes: 669
Views: 346577
Reputation: 15453
docker-compose exists to keep you from having to write a ton of individual commands with the docker CLI.
docker-compose also makes it easy to start up multiple containers at the same time and automatically connect them together with some form of networking.
The purpose of docker-compose is to do the same work as the docker CLI, but to let you issue many commands much more quickly.
To make use of docker-compose, you need to encode the commands you were running before into a docker-compose.yml file.
You can't just copy-paste the commands into the YAML file; it has its own syntax.
Once the file is created, you feed it to the docker-compose CLI, which parses it and creates all the different containers with the configuration you specify.
So say you have two separate containers: one is redis-server and the second is node-app, and you want the node-app container built using the Dockerfile in your current directory.
Additionally, after making that container, you would map some port from the container to the local machine to access everything running inside of it.
For your docker-compose.yml file, you would start the first line like so:
version: '3'
That tells Docker which version of the Compose file format you want to use. After that, you have to add:
version: '3'
services:
  redis-server:
    image: 'redis'
  node-app:
    build: .
Please notice the indentation; it is very important. Also, notice that for one service I am grabbing an image, while for the other service I am telling docker-compose to look inside the current directory and build the image that will be used for the second container.
Then you want to specify all the different ports that you want open on this container.
version: '3'
services:
  redis-server:
    image: 'redis'
  node-app:
    build: .
    ports:
      -
Please notice the dash; a dash in a YAML file is how we specify an array. In this example, I am mapping port 8081 on my local machine to port 8081 on the container, like so:
version: '3'
services:
  redis-server:
    image: 'redis'
  node-app:
    build: .
    ports:
      - "8081:8081"
So the first port is the one on your local machine, and the other is the port on the container. You could also use two different numbers to distinguish between the two and avoid confusion, like so:
version: '3'
services:
  redis-server:
    image: 'redis'
  node-app:
    build: .
    ports:
      - "4001:8081"
By writing your docker-compose.yml file like this, these containers are created on essentially the same network, where they have free access to communicate with each other any way they please and exchange as much information as they want.
When two containers are created through docker-compose, no port declarations are needed for them to reach each other; the ports mapping only exposes a container to your local machine.
Now in my example, we need to do some code configuration in the Node.js app that looks something like this:
const express = require('express');
const redis = require('redis');

const app = express();
const client = redis.createClient({
  host: 'redis-server' // the service name in docker-compose.yml doubles as the hostname
});
I use this example above to make you aware that there may be some configuration in your application code, in addition to the docker-compose.yml file, that is specific to your project.
Now, if you ever find yourself working with a Node.js app and redis, you want to ensure you are aware of the default port redis uses, so I will add this:
const express = require('express');
const redis = require('redis');

const app = express();
const client = redis.createClient({
  host: 'redis-server',
  port: 6379 // redis listens on 6379 by default
});
So Docker is going to see that the Node.js app is looking for redis-server and redirect that connection over to the running redis container.
The whole time, the Dockerfile only contains this:
FROM node:alpine
WORKDIR /app
COPY ./package.json ./
RUN npm install
COPY . .
CMD ["npm", "start"]
So, whereas before you would have to run docker run myimage to create an instance of each container, you can instead run docker-compose up, and you don't have to specify an image, because Docker will look in the current working directory for a docker-compose.yml file.
Before docker-compose.yml, we had to deal with the two separate commands docker build . and docker run myimage, but in the docker-compose world, if you want to rebuild your images, you write docker-compose up --build. That tells Docker to start up the containers again, but rebuild the images first to pick up the latest changes.
So docker-compose makes working with multiple containers easier. The next time you need to start this group of containers in the background, you can run docker-compose up -d; and to stop them, run docker-compose down.
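To recap, the commands used in this answer, side by side:

docker-compose up           # create and start every service in the file
docker-compose up --build   # rebuild the images first, then start
docker-compose up -d        # start everything in the background (detached)
docker-compose down         # stop and remove the containers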
Upvotes: 259
Reputation: 60832
When I first started learning Docker, the following example is what helped me understand it.
Suppose you have a web-app + database config, which is a pretty common setup.
The web-app part is usually built with a Dockerfile, because you need some framework (like Python/Django, or NodeJS, or DotNet, etc.) that actually runs your code, installs dependencies, etc. So the framework + web-app live in one container, tightly coupled together; otherwise it won't work. That's the Dockerfile's job.
However, the database part usually lives in a separate container, because the database is decoupled from the app. It runs on its own, it does not need NodeJS or anything else, it is backed up and restored separately from your web-app, and the database server can be upgraded or downgraded separately. That is why you usually create that container separately: you take a public database-engine image (like Postgres, SQL Server, MySQL, Mongo, etc.), open some ports, mount some volumes to persist the data, and so on.
And in the end you connect the two parts together using docker-compose, which, in a sense, gives you one application made up of cooperating containers.
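A minimal sketch of that layout in a docker-compose.yml (the service names, the postgres image, the port, and the volume name are illustrative assumptions, not a prescription):

version: '3'
services:
  web-app:
    build: .                  # built from the web-app's Dockerfile
    ports:
      - "8000:8000"
  database:
    image: postgres           # public database-engine image
    volumes:
      - db-data:/var/lib/postgresql/data   # persist the data outside the container
volumes:
  db-data: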
Upvotes: 3
Reputation: 1725
A Docker Compose file is a way for you to declaratively orchestrate the startup of multiple containers, rather than running each Dockerfile separately with a bash script, which would be much slower to write and harder to debug.
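For contrast, the bash-script alternative might look something like this (the image and network names are assumptions for illustration):

# without compose: build and wire up each container by hand
docker network create app-net
docker build -t node-app .
docker run -d --name redis-server --network app-net redis
docker run -d --name node-app --network app-net -p 8081:8081 node-app

With a compose file, all of that collapses into a single docker-compose up.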
Upvotes: 132
Reputation: 3113
Dockerfiles are for building images. For example, starting from a bare-bones Ubuntu, you can build one image with mysql installed and call it mySQL, and a second image with WordPress installed and call it mywordpress.
Compose YAML files take these images and run them cohesively. For example, if you have in your docker-compose.yml file a service called db:
services:
  db:
    image: mySQL   # the image that you built
and a service called wordpress, such as:
  wordpress:
    image: mywordpress
then inside the mywordpress container you can use the name db to connect to your mySQL container. This magic is possible because your Docker host creates a network bridge (network overlay).
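For instance, if you used the official wordpress and mysql images instead of custom builds (an assumption purely for illustration), you would point wordpress at the db service by that name:

services:
  db:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: example   # the mysql image requires a root password
  wordpress:
    image: wordpress
    environment:
      WORDPRESS_DB_HOST: db          # the service name doubles as the hostname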
Upvotes: 10
Reputation: 1347
"better" is relative. It all depends on what your needs are. Docker compose is for orchestrating multiple containers. If these images already exist in the docker registry, then it's better to list them in the compose file. If these images or some other images have to be built from files on your computer, then you can describe the processes of building those images in a Dockerfile.
I understand that Dockerfiles are used in Docker Compose, but I am not sure if it is good practice to put everything in one large Dockerfile with multiple FROM commands for the different images?
Using multiple FROM commands in a single Dockerfile is not a very good idea, because there is a proposal to remove the feature (see issue 13026).
If, for instance, you want to dockerize an application which uses a database and you have the application files on your computer, you can use a compose file together with a Dockerfile as follows:
mysql:
  image: mysql:5.7
  volumes:
    - ./db-data:/var/lib/mysql
  environment:
    - "MYSQL_ROOT_PASSWORD=secret"
    - "MYSQL_DATABASE=homestead"
    - "MYSQL_USER=homestead"
  ports:
    - "3307:3306"
app:
  build:
    context: ./path/to/app   # the directory that contains the Dockerfile
    dockerfile: Dockerfile
  volumes:
    - ./:/app
  working_dir: /app
and the accompanying Dockerfile:
FROM php:7.1-fpm
RUN apt-get update && apt-get install -y libmcrypt-dev \
        mysql-client libmagickwand-dev --no-install-recommends \
    && pecl install imagick \
    && docker-php-ext-enable imagick \
    && docker-php-ext-install pdo_mysql \
    && curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
Upvotes: 59
Reputation: 58760
A Dockerfile is a simple text file that contains the commands a user could call to assemble an image.
Example Dockerfile:
FROM ubuntu:latest
MAINTAINER john doe
RUN apt-get update
RUN apt-get install -y python python-pip wget
RUN pip install Flask
ADD hello.py /home/hello.py
WORKDIR /home
# assumed start command, added so the image actually runs the app
CMD ["python", "hello.py"]
Docker Compose is a tool for defining and running multi-container Docker applications.
You define the services that make up your app in docker-compose.yml so they can be run together in an isolated environment.
Then you get the whole app running with one command, by just running docker-compose up.
Example docker-compose.yml:
version: "3"
services:
web:
build: .
ports:
- '5000:5000'
volumes:
- .:/code
- logvolume01:/var/log
links:
- redis
redis:
image: redis
volumes:
logvolume01: {}
Upvotes: 802
Reputation: 562
A Dockerfile is a text file that contains the commands used to assemble an image.
Docker Compose is used to run a multi-container environment.
In your specific scenario, if you have multiple services, one for each technology you mentioned (service 1 using redis, service 2 using RabbitMQ, etc.), then you can have a Dockerfile for each of the services and a common docker-compose.yml to build all the Dockerfiles and run the resulting images as containers, as sketched below.
Even if you want to run them all as a single application, docker-compose is a viable option.
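For the stack from the question, such a layout might be sketched like this (the service names, image tags, and build directories are assumptions for illustration):

version: '3'
services:
  web:                 # Django + uwsgi, built from its own Dockerfile
    build: ./web
  nginx:
    image: nginx
    ports:
      - "80:80"
  postgres:
    image: postgres
  redis:
    image: redis
  rabbitmq:
    image: rabbitmq
  celery:              # celery worker, built from its own Dockerfile
    build: ./celery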
Upvotes: 4
Reputation: 2842
In the Microservices world (with a common shared codebase), each microservice would have its own Dockerfile, whereas at the root level (generally outside all the microservices, where your parent POM resides) you would define a docker-compose.yml to group all the microservices into the full-blown app, as sketched below.
In your case, "Docker Compose" is preferred over "Dockerfile". Think "App", think "Compose".
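A minimal sketch of that repository layout (the service and directory names are assumptions for illustration):

# docker-compose.yml at the root, next to the parent POM
version: '3'
services:
  users-service:
    build: ./users-service    # this directory contains its own Dockerfile
  orders-service:
    build: ./orders-service   # this one too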
Upvotes: 4
Reputation: 5139
Dockerfile and Docker Compose are two different concepts in Dockerland. When we talk about Docker, the first things that come to mind are orchestration, OS-level virtualization, images, containers, etc. I will try to explain each as follows:
Image: An image is an immutable, shareable file that is stored in a Docker-trusted registry. A Docker image is built up from a series of read-only layers. Each layer represents an instruction that is being given in the image’s Dockerfile. An image holds all the required binaries to run.
Container: An instance of an image is called a container. A container is just an executable image binary that is to be run by the host OS. A running image is a container.
Dockerfile:
A Dockerfile is a text document that contains all of the commands / build instructions a user could call on the command line to assemble an image. It is saved as Dockerfile. (Note the lowercase 'f'.)
Docker Compose:
Compose is a tool for defining and running multi-container Docker applications. With Compose, you use a YAML file to configure your application's services (containers). Then, with a single command, you create and start all the services from your configuration. The Compose file is saved as docker-compose.yml.
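The two files map onto plain CLI commands like so (standard commands, nothing project-specific):

docker build -t myimage .   # Dockerfile         -> image
docker run myimage          # image              -> running container
docker-compose up           # docker-compose.yml -> all services at once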
Upvotes: 67
Reputation: 3915
Imagine you are the manager of a software company and you just bought a brand new server. Just the hardware.
Think of a Dockerfile as the set of instructions you would give your system administrator about what to install on this brand new server: for example, an operating system, a web server, and the project files copied into the webroot (/var/www).
By contrast, think of docker-compose.yml as the set of instructions you would give your system administrator about how the server can interact with the rest of the world: for example, which of its ports map to ports on the host machine, and which other machines it talks to.
(This is not a precise explanation, but it is good enough to start with.)
Upvotes: 23
Reputation: 6487
In my workflow, I add a Dockerfile for each part of my system and configure it so that each part can run individually. Then I add a docker-compose.yml to bring them together and link them.
Biggest advantage (in my opinion): when linking the containers, you can define a name for each one and reach your containers by that name. Therefore your database is accessible under the name db, and no longer only by its IP.
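A tiny sketch of that idea (the service names and the connection string are assumptions for illustration):

services:
  db:
    image: postgres    # other containers reach it simply as "db"
  web:
    build: ./web       # e.g. connects to postgres://db:5432/mydb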
Upvotes: 54
Reputation: 4577
The answer is neither.
Docker Compose (herein referred to as compose) will use the Dockerfile if you add the build directive to your project's docker-compose.yml.
Your Docker workflow should be to build a suitable Dockerfile for each image you wish to create, then use compose to assemble the images with the build directive.
You can specify the path to an individual Dockerfile using build: /path/to/dockerfiles/blah, where /path/to/dockerfiles/blah is the directory where blah's Dockerfile lives.
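In compose-file form, that looks something like this (the service name blah simply mirrors the path above):

services:
  blah:
    build: /path/to/dockerfiles/blah   # compose builds blah's Dockerfile from here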
Upvotes: 347