Reputation: 836
I'm totally new to Docker so I appreciate your patience.
I'm looking for a way to deploy multiple containers with the same image; however, I need to pass a different config (file) to each one.
Right now, my understanding is that once you build an image, that's what gets deployed. The problem for me is that I don't see the point in building multiple images of the same application when only the config differs between the containers.
If this is the norm, then I'll have to deal with it; however, if there's another way, please put me out of my misery! :)
Thanks!
Upvotes: 62
Views: 81989
Reputation: 6627
For the Docker Desktop GUI, and for those who prefer mouse clicks (like me, sometimes): run the image once, giving the container a name and options, then repeat with a different name and options. Now you have two containers running from the same image (see the Containers tab).
Upvotes: 0
Reputation: 241
Let me introduce another way to create multiple Docker containers from one Docker image: a docker-compose.yml file in the project root. After updating the contents of docker-compose.yml to include the network config, the inbound and outbound port configs, and so on, we can create the containers with this command: docker-compose -f /var/www/..path/docker-compose.yml up -d
Here, -d runs the containers in the background (detached mode).
Here is the file in more detail:
version: '3.8'
x-common:
  database:
    &db-environment
    MYSQL_PASSWORD: &db-password 'CHANGE_ME'
    MYSQL_ROOT_PASSWORD: 'CHANGE_ME_TOO'
  panel:
    &panel-environment
    APP_URL: 'https://example.com'
    APP_TIMEZONE: 'UTC'
    APP_SERVICE_AUTHOR: '[email protected]'
  mail:
    &mail-environment
    MAIL_FROM: '[email protected]'
    MAIL_DRIVER: 'smtp'
    MAIL_HOST: 'mail'
    MAIL_PORT: '1025'
    MAIL_USERNAME: ''
    MAIL_PASSWORD: ''
    MAIL_ENCRYPTION: 'true'
services:
  database:
    image: mariadb:10.5
    container_name: tttt-database
    restart: always
    command: --default-authentication-plugin=mysql_native_password
    volumes:
      - '/srv/pterodactyl/database:/var/lib/mysql'
    environment:
      <<: *db-environment
      MYSQL_DATABASE: 'panel'
      MYSQL_USER: 'pterodactyl'
    networks:
      - tttt-bridge-network
  cache:
    image: redis:alpine
    container_name: tttt-cache
    restart: always
    networks:
      - tttt-bridge-network
  panel:
    image: ghcr.io/image_url
    container_name: tttt-panel
    restart: always
    ports:
      - '3004:80' # dynamic ports
    links:
      - database
      - cache
    volumes:
      - '/srv/pterodactyl/var/:/app/var/'
      - '/srv/pterodactyl/nginx/:/etc/nginx/http.d/'
      - '/srv/pterodactyl/certs/:/etc/letsencrypt/'
      - '/srv/pterodactyl/logs/:/app/storage/logs'
    environment:
      <<: [*panel-environment, *mail-environment]
      DB_PASSWORD: *db-password
      APP_ENV: 'production'
      APP_ENVIRONMENT_ONLY: 'false'
      CACHE_DRIVER: 'redis'
      SESSION_DRIVER: 'redis'
      QUEUE_DRIVER: 'redis'
      REDIS_HOST: 'cache'
      DB_HOST: 'database'
      DB_PORT: '3306'
networks:
  tttt-bridge-network: # dynamic networks
    driver: bridge
    ipam:
      config:
        - subnet: 172.20.0.0/16 # dynamic address
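Applied to the original question, the same mechanism lets you run one image several times with a different config file mounted into each service. A minimal sketch, assuming an image and config paths that are purely illustrative (not taken from the file above):

```yaml
version: '3.8'
services:
  app-one:
    image: myapp:latest  # same image for both services
    volumes:
      - './configs/one.ini:/etc/myapp/app.ini:ro'  # config for container one
  app-two:
    image: myapp:latest
    volumes:
      - './configs/two.ini:/etc/myapp/app.ini:ro'  # config for container two
```

Running docker-compose up -d then starts both containers from the same image, each reading its own config.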
Upvotes: 1
Reputation: 5601
Just run the same image as many times as needed. New containers will be created each time, and they can then be started and stopped, each one keeping its own configuration. For convenience, give each of your containers a name with "--name".
For instance:
docker run --name MyContainer1 <same image id>
docker run --name MyContainer2 <same image id>
docker run --name MyContainer3 <same image id>
That's it.
$ docker ps
CONTAINER ID IMAGE CREATED STATUS NAMES
a7e789711e62 67759a80360c 12 hours ago Up 2 minutes MyContainer1
87ae9c5c3f84 67759a80360c 12 hours ago Up About a minute MyContainer2
c1524520d864 67759a80360c 12 hours ago Up About a minute MyContainer3
After that, your containers exist permanently, and you can start and stop them like VMs.
docker start MyContainer1
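To give each of those containers a different config, the same docker run commands can mount a per-container file. A dry-run sketch that only builds and prints the commands (the image name, container names, and host paths are illustrative assumptions, not from the thread):

```shell
# Build (and print) one `docker run` command per container; each mounts
# its own host config file onto the same path inside the container.
image="myapp:latest"
cmds=""
for name in MyContainer1 MyContainer2 MyContainer3; do
  cmd="docker run -d --name $name -v $PWD/conf/$name.ini:/etc/myapp/app.ini:ro $image"
  cmds="$cmds$cmd
"
  echo "$cmd"
done
```

Dropping the echo wrapper (running each $cmd directly) would create three containers from one image, differing only in the mounted config.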
Upvotes: 21
Reputation: 9916
I think looking at examples that are easy to understand will give you the best picture.
What you want to do is perfectly valid: an image should contain everything you need to run, without the configuration.
To get the configuration into the container, you can either:
a) Use volumes and mount the file during container start: docker run -v $PWD/my.ini:/etc/mysql/my.ini percona (and similarly with docker-compose; note the host path must be absolute, hence $PWD, or Docker treats it as a named volume).
You can repeat this as often as you like, mounting several configs into your container (the runtime version of the image).
You create those configs on the host before running the container and need to ship those files with the container, which is the downside of this approach (portability).
b) Most of the advanced Docker images provide a complex, so-called entrypoint, which consumes ENV variables you pass when starting the image to create the configuration(s) for you, like https://github.com/docker-library/percona/blob/master/5.7/docker-entrypoint.sh
So when you run this image, you can do docker run -e MYSQL_DATABASE=myapp percona
and this will start Percona and create the database myapp for you. This is all done by the entrypoint script.
Of course, you can do whatever you like with this. E.g. this configures a general Portus image: https://github.com/EugenMayer/docker-rancher-extra-catalogs/blob/master/templates/registry-slim/11/docker-compose.yml which has this entrypoint: https://github.com/EugenMayer/docker-image-portus/blob/master/build/startup.sh
So you see, the entrypoint strategy is very common and very powerful, and I would suggest going this route whenever you can.
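As a rough illustration of that pattern (a simplified sketch, not the percona entrypoint; all variable and file names here are made up): an entrypoint script reads ENV variables and renders the config before handing off to the app.

```shell
#!/bin/sh
# Minimal entrypoint sketch: render ENV variables into a config file,
# falling back to defaults when a variable is unset.
APP_PORT="${APP_PORT:-8080}"
DB_HOST="${DB_HOST:-localhost}"
CONF=/tmp/app.conf
cat > "$CONF" <<EOF
port = $APP_PORT
db_host = $DB_HOST
EOF
cat "$CONF"
# A real entrypoint would now hand off to the application, e.g.:
# exec myapp --config "$CONF"
```

Wired up with ENTRYPOINT in the Dockerfile, each docker run -e APP_PORT=... then gets its own generated config from the same image.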
c) Maybe for "completeness", the image-derive strategy: you have your base image called "myapp", and for installation X you create a new image from a Dockerfile:
FROM myapp
COPY my.ini /etc/mysql/my.ini
COPY application.yml /var/app/config/application.yml
and call this image myapp:x. The obvious issue with this is that you end up having a lot of images; on the other side, compared to a), it's much more portable.
Hope that helps
Upvotes: 30
Reputation: 263726
Each container runs with the same read-only (RO) image but with a read-write (RW) container-specific filesystem layer. The result is that each container can have its own files, distinct from every other container.
You can pass in configuration on the CLI, as an environment variable, or as a unique volume mount. It's a very standard use case for Docker.
Upvotes: 7