Reputation: 693
I have created the following docker-compose.yml file to build docker containers for a Django application:
version: "2.4"
services:
db:
image: postgres:11
env_file:
- .env_prod_db
volumes:
- db:/var/lib/postgresql/data/
networks:
- net
logging:
driver: "json-file"
options:
max-file: "5"
max-size: "10m"
web:
build:
context: .
dockerfile: Dockerfile
env_file:
- .env_prod_web
command: gunicorn roster_project.wsgi:application --disable-redirect-access-to-syslog --error-logfile '-' --access-logfile '-' --access-logformat '%(t)s [GUNICORN] %(h)s %(l)s %(u)s "%(r)s" %(s)s %(b)s "%(f)s" "%(a)s"' --workers 3 --bind 0.0.0.0:8000
volumes:
- static:/roster/webserver/static/
networks:
- net
expose:
- 8000
depends_on:
- db
logging:
driver: "json-file"
options:
max-file: "5"
max-size: "10m"
nginx:
build: ./nginx
ports:
- 80:80
volumes:
- static:/roster/webserver/static/
networks:
- net
depends_on:
- web
logging:
driver: "json-file"
options:
max-file: "5"
max-size: "10m"
networks:
net:
enable_ipv6: true
driver: bridge
ipam:
driver: default
config:
- subnet: fd02::/64
gateway: fd02::1
volumes:
db:
static:
Potential users of my app could use this file to deploy it if they first download all the source code from GitHub. However, I would like them to be able to deploy the app just by using docker-compose to pull Docker images that I have stored in my Docker Hub repo.
If I upload my Docker images to a Docker Hub repo, do I need to create an additional docker-compose.yml that refers to the repo images so that others can deploy my app on their own Docker hosts? Or can I somehow combine the build and deploy requirements into a single docker-compose.yml file?
Upvotes: 1
Views: 2160
Reputation: 160051
You can use multiple Compose files when you run docker-compose commands. The easiest way to do this is to have a main docker-compose.yml file that lists out the standard (usually production-oriented) settings, and a docker-compose.override.yml file that overrides its settings.
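When both files sit next to each other, plain docker-compose commands merge them automatically; the override file needs no extra flags. A minimal sketch of that default behaviour:
# docker-compose.override.yml is picked up automatically when it exists
docker-compose up -d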
For example, the base docker-compose.yml file could look like:
version: "2.4"
services:
db:
image: postgres:11
volumes:
- db:/var/lib/postgresql/data/
web:
image: me/web
depends_on:
- db
nginx:
image: me/nginx
ports:
- 80:80
depends_on:
- web
volumes:
db:
Note that I've removed all of the deployment-specific setup (logging configuration, manual IP overrides, environment files); I'm using the Compose-provided default network; I avoid overwriting the static assets with old content from a Docker volume; and I provide an image: name even for things I build locally.
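Giving the locally built services an image: name is also what makes publishing straightforward: once a build: context is attached (see the development override below), docker-compose build tags the results as me/web and me/nginx, and docker-compose push uploads those tags. A sketch, assuming me/ is your Docker Hub namespace and you're already logged in via docker login:
docker-compose -f docker-compose.yml -f docker-compose.development.yml build
docker-compose -f docker-compose.yml -f docker-compose.development.yml push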
The differences between this and the "real" production settings can be put in a separate docker-compose.production.yml file:
version: "2.4"
services:
db:
# Note, no image: or other similar settings here;
# they come from the base docker-compose.yml
env_file:
- .env_prod_db
logging:
driver: "json-file"
options:
max-file: "5"
max-size: "10m"
web: # similarly
db: # similarly
networks:
default:
enable_ipv6: true
# and other settings as necessary
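For completeness, env_file entries point at plain KEY=value files. A hypothetical .env_prod_db could set the standard variables the official postgres image reads (the variable names are the documented ones; the values here are placeholders):
# .env_prod_db -- hypothetical values
POSTGRES_USER=roster
POSTGRES_PASSWORD=change-me
POSTGRES_DB=roster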
For development, on the other hand, you need to supply the build: information. docker-compose.development.yml can contain:
version: "2.4"
services:
web:
build: .
nginx:
build: ./nginx
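If you'd rather not rely on the docker-compose.override.yml name, you can list both files explicitly; later files win where settings overlap:
docker-compose -f docker-compose.yml -f docker-compose.development.yml up --build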
Then you can use a symbolic link to make one of these the current override file:
ln -sf docker-compose.production.yml docker-compose.override.yml
A downstream deployer will need the base docker-compose.yml, which mentions the Docker Hub images. They can use your production values if it makes sense for them, or they can use a different setup. They shouldn't need the rest of the application source code or Dockerfiles (though it's probably all in the same GitHub repository).
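In the simplest case the deployer only needs the base file in an otherwise empty directory; a minimal sketch of their side:
docker-compose pull   # fetch me/web, me/nginx, and postgres:11 from the registry
docker-compose up -d  # start the stack from the pulled images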
Upvotes: 4