Reputation: 19
For deploying a large number of containers (say 25) on a single host, would it be better to have a large custom base image with all the libraries needed by every application, or to develop a near-custom image for each container, with only the libraries that application needs?
The Docker design pattern is to remain light (i.e. only what is required), but it has also been argued that if many containers use the same base image then "resources" can be shared.
A previous question on Resource Sharing, and most of the Docker documentation, says containers don't share anything. This previous question on Multiple Base Images leads one to believe that, regardless of whether you use one large image or many custom images, any overlapping layers will be shared. In that case the large base image may have slightly higher overhead, but it would be less development work because you can naively throw everything together (and even though it's huge, disk space is plentiful).
Technically speaking, what are the pros and cons of implementing a large base image versus small custom images? How do containers from the same base image "share resources"?
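To make the two options concrete, here is a rough sketch of what each approach might look like (image names and package lists are hypothetical, just for illustration):

```dockerfile
# Option A: one large shared base image with everything every app needs
# Dockerfile.base
FROM debian:stable
RUN apt-get update && apt-get install -y libfoo libbar libbaz  # union of all deps (hypothetical)

# Each application image then only adds its own code on top:
#   FROM mycompany/big-base
#   COPY app1 /opt/app1

# Option B: a minimal image per application
#   FROM debian:stable
#   RUN apt-get update && apt-get install -y libfoo  # only what app1 needs
#   COPY app1 /opt/app1
```

In Option A, the big base layer is stored once and shared by all 25 containers; in Option B, each image carries only its own dependencies, but common layers (here, `debian:stable`) are still shared.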
Upvotes: 0
Views: 1368
Reputation: 1224
Use different images. @Thomasleveil is right, and separate images also save time when you have a software upgrade, because you only have to rebuild a smaller image and restart a smaller number of containers.
That said, there are situations where larger images are preferred. For example, if two small programs both need the JDK to run, and the JDK is a very large dependency, it probably makes sense to build one large image with all three installed. But you shouldn't start two different containers to run the programs; you should run both programs inside the same container. You can use a supervisor process inside the container to manage the multiple small programs.
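As a rough sketch of that setup (the image tag, jar names, and paths are hypothetical), the Dockerfile could install supervisord and hand it control:

```dockerfile
FROM openjdk:8                       # large shared JDK base (hypothetical tag)
RUN apt-get update && apt-get install -y supervisor
COPY app1.jar app2.jar /opt/
COPY supervisord.conf /etc/supervisor/conf.d/apps.conf
# run supervisord in the foreground as PID 1 so the container stays up
CMD ["supervisord", "-n"]
```

with a supervisord config along these lines:

```ini
[program:app1]
command=java -jar /opt/app1.jar

[program:app2]
command=java -jar /opt/app2.jar
```

Both small programs then share one container and one copy of the JDK layer, at the cost of losing one-process-per-container isolation.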
Upvotes: 0
Reputation: 103965
Docker images can share common layers of their filesystem. Remember, Docker uses a layered filesystem for images and containers.
A Docker container's filesystem sits on top of a Docker image's layers, but changes made in one container do not affect the filesystem of its image or of other containers.
The only thing all Docker containers share is the kernel, which is the Docker host's kernel.
You can also share mount points between Docker containers using Docker data volumes.
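For example (the volume and container names here are hypothetical), two containers can read and write the same files through a named volume:

```shell
# create a named volume and mount it in two containers
docker volume create shared-data
docker run -d --name writer -v shared-data:/data busybox \
    sh -c 'echo hello > /data/msg; sleep 3600'
docker run --rm -v shared-data:/data busybox cat /data/msg
# should output: hello
```

The volume lives outside any container's writable layer, so its contents survive container removal and are visible to every container that mounts it.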
For deploying a large number of containers (say 25) on a single host, would it be better to have a large custom base image with all libraries needed by every application, or develop near custom images for each container with only necessary libraries for each?
It depends on your criteria for "better". If you are after speed, one big base image would make things quicker, since it would be pulled only once and all the Docker images based on it would only have to pull their additional layers.
You could also assume that most changes would happen in the child images, and that the big base image would only need to be updated occasionally.
Upvotes: 2