Reputation: 816
In my previous company, we adopted a microservice architecture and used Docker to implement it. The average size of our Docker images was ~300MB - ~600MB. However, my new company uses Docker mostly for the development workflow, and the average image size is ~1.5GB - ~3GB. Some of the larger images (10GB+) are being actively refactored to reduce their size.
From everything I have read, I feel that these images are too large and we will run into issues down the line, but the rest of the team feels that Docker Engine and Docker Swarm should handle those image sizes without problems.
My question: Is there an accepted ideal range for Docker images, and what pitfalls (if any) will I face trying to use a workflow with GB images?
Upvotes: 18
Views: 35561
Reputation: 46
We have used images in the 1GB - 3GB range, but anything over 5GB is generally considered too large.
There are ways to reduce the size, such as using multi-stage builds.
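As a sketch of that approach (assuming a Go application; the image tags, stage names, and paths are illustrative, so adapt them to your stack), the build toolchain lives only in the first stage, and the final image receives nothing but the compiled binary:

```dockerfile
# Build stage: full Go toolchain, used only at build time
FROM golang:1.22 AS build
WORKDIR /src
COPY . .
# CGO_ENABLED=0 produces a static binary that can run on alpine
RUN CGO_ENABLED=0 go build -o /app .

# Final stage: only the compiled binary is copied over;
# the multi-hundred-MB toolchain stays behind in the build stage
FROM alpine:3.20
COPY --from=build /app /app
ENTRYPOINT ["/app"]
```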
Reference Documentation:
https://devopscube.com/reduce-docker-image-size/
Upvotes: 1
Reputation: 1396
Docker itself can handle them no problem; I can't say anything about Swarm. "How big is too big," though, is something only your team can answer. If the image is 5GB and 90% of it is important to the application, I wouldn't say that it's bloated. If the image is only 300MB but just 10% of it is required by the application, I'd say that it's bloated.
FWIW, depending on just how "new" your "new company" is, it's probably best if you don't rock the boat.
Upvotes: 6
Reputation: 704
In my opinion, the ideal size is only ideal for your exact case. At my current company, we have no image bigger than 1GB.
If you use a 10GB image and have no problems (is that even possible?!), then it is OK for your case.
As an example of a problem case, consider a question such as: "Is it OK that I wait 1-2 hours while my image deploys over the internet to a remote server/dev machine?" In all likelihood, it is not. On the other hand, if you are not facing such a problem, then you have no problem at all.
Another issue is startup time: while small images start up in a couple of seconds, huge ones can take minutes. This can also break a "hot deploy" scheme if you use one.
It is also worth checking why your image is so big; reading up on how layers work will help.
Consider the following two simplified Dockerfiles, which simulate a huge 5GB download (here the file is created with dd so the example is runnable):
First:

    FROM debian
    # This layer grows the image by 5GB
    RUN dd if=/dev/zero of=/tmp/huge.bin bs=1M count=5120
    # This creates a NEW layer; the 5GB layer above still ships with the image
    RUN rm /tmp/huge.bin

Second:

    FROM debian
    # Create and delete within a single layer: the 5GB never lands in the image
    RUN dd if=/dev/zero of=/tmp/huge.bin bs=1M count=5120 && \
        rm /tmp/huge.bin
The image built from the second Dockerfile weighs 5GB less than that from the first, while they are the same inside.
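The same single-layer principle applies to package-manager caches. A common pattern on Debian-based images (a sketch; the installed package is just an example) is to clean up within the same RUN instruction that made the mess:

```dockerfile
FROM debian:bookworm-slim
# Install and clean up in ONE RUN so the apt cache and
# package lists never persist into a committed layer
RUN apt-get update && \
    apt-get install -y --no-install-recommends curl && \
    rm -rf /var/lib/apt/lists/*
```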
Another trick is to use a small, basic image from the beginning. Just compare these differences:
    IMAGE NAME    SIZE
    busybox       1 MB
    alpine        3 MB
    debian      125 MB
    ubuntu      188 MB
While debian and ubuntu are almost the same inside, debian will save you ~60MB from the start and will typically need fewer dependencies in the future.
Upvotes: 20