Reputation: 11575
I have a deployment setup with Docker that works as follows:
I'd like to do these steps as quickly as possible, but they take an incredibly long time. Even for an image of modest size (750 MiB, including the standard ubuntu base and friends), after a small modification it takes 17 minutes to deploy. I optimized the order of the instructions in my Dockerfile, so it actually hits the cached images most of the time. This doesn't seem to make a difference.
The main culprit is the docker push step. For both Docker Hub and Quay.io, it takes an unrealistically long time to push images. In one simple benchmark, I executed docker push twice back to back, so all the previous images are already on the registry, and I only see lines like these:
...
bf84c1d841244f: Image already pushed, skipping
...
But if I time the push, the performance is horrendous. Pushing to Quay.io takes 3.5 minutes when all the images are already on the server! Pushing to Docker Hub takes about 12 minutes!
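Roughly, the benchmark was the following (the image name here is just a placeholder for my real one):

docker push quay.io/myorg/myapp:latest          # first push, uploads any missing layers
time docker push quay.io/myorg/myapp:latest     # second push, every layer already on the registry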
There is clearly something wrong somewhere: many people use Docker in production, yet these times are the exact opposite of continuous delivery.
How can I make this run quicker? Do others also see this kind of performance? Does it have to do with the registry services, or somehow related to my local machine?
I am using Docker under Mac OS X.
Upvotes: 15
Views: 21050
Reputation: 143
Just a note: I run my own Docker registry, local to the machine I issue the "docker push" command on, and it still takes an inordinate amount of time. It is definitely not a disk I/O issue, since the registry is backed by SSDs that sustain 500+ MB/s for everything else that uses them. Yet docker push takes just as long as pushing to a remote site, so I think something beyond "bandwidth" is going on. My suspicion is that even though my registry is local, the push still goes through the NIC to transfer the data (which seems to make sense, given that the push destination is a URI and the registry is itself a container).
That said, I can copy the same file(s) to where they will ultimately reside in the local registry orders of magnitude faster than the push command can, so perhaps the solution is simply that. What is clear is that the problem is not one of bandwidth per se, but of the data path in general.
At any rate, running a local registry will probably not (totally) solve the OP's issue. I have only started to investigate, but I suspect a change to Docker itself would be needed to resolve this. I don't think it is a bug so much as a design challenge: URIs and host-to-host communication require a network stack, even when the source and destination are the same machine/host/container.
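For reference, a setup like mine can be reproduced with something along these lines (the registry image, port, and the myapp name are the conventional defaults and placeholders, not necessarily what I use):

# run the registry as a container on the local machine
docker run -d -p 5000:5000 --name registry registry:2
# tag the image with the local registry's address, then push
docker tag myapp:latest localhost:5000/myapp:latest
docker push localhost:5000/myapp:latest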
Upvotes: 7
Reputation: 28106
As was said in the previous answer, you could use your own local registry. It is not hard to install and use; here you can find information on how to get started with it. It can be much faster, because you are not limited by your provider's upload speed. And you can always push an image from the local registry to Docker Hub or to another registry (for example, one installed on your customer's network).
One more thing I would suggest, in terms of continuous integration and delivery, is to use a continuous integration server that builds your images automatically on a Linux host, where you don't need boot2docker or docker-machine. For test and development purposes, you can build your images locally without pushing to a remote registry at all.
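As a rough sketch of that last point (the image names and the test command are placeholders), the local build-and-test loop needs no registry at all, and only the release step pushes:

# build and test locally, no registry involved
docker build -t myapp:dev .
docker run --rm myapp:dev ./run-tests.sh    # run-tests.sh stands in for your own test command
# only when releasing, retag and push to Docker Hub (or another registry)
docker tag myapp:dev myhubuser/myapp:latest
docker push myhubuser/myapp:latest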
Upvotes: 1
Reputation: 46500
For this reason, organizations typically run their own registries on the local network. This also keeps organizations in control of their own data and avoids relying on an external service.
You will also find that cloud hosts such as Google Container Engine and the Amazon Container Service offer hosted registries to provide users with fast, local downloads.
Upvotes: 0