Reputation: 60093
How do I transfer a Docker image from one machine to another one without using a repository, no matter private or public?
I create my own image in VirtualBox, and when it is finished I try to deploy to other machines to have real usage.
Since it is based on my own base image (like Red Hat Linux), it cannot simply be recreated from a Dockerfile; my Dockerfile isn't easily portable.
Are there simple commands I can use? Or another solution?
Upvotes: 2395
Views: 1475527
Reputation: 2787
First, you're going to want to read the Docker Image Specification v1.0.0.
It's surprising to me that nobody has linked to the actual specification that describes what a Docker image is and how it should be laid out on disk. To quote from the above-linked spec:
For example, here's what the full archive of library/busybox is (displayed in tree format):
.
├── 5785b62b697b99a5af6cd5d0aabc804d5748abbb6d3d07da5d1d3795f2dcc83e
│   ├── VERSION
│   ├── json
│   └── layer.tar
├── a7b8b41220991bfc754d7ad445ad27b7f272ab8b4a2c175b9512b97471d02a8a
│   ├── VERSION
│   ├── json
│   └── layer.tar
├── a936027c5ca8bf8f517923169a233e391cbb38469a75de8383b5228dc2d26ceb
│   ├── VERSION
│   ├── json
│   └── layer.tar
├── f60c56784b832dd990022afc120b8136ab3da9528094752ae13fe63a2d28dc8c
│   ├── VERSION
│   ├── json
│   └── layer.tar
└── repositories
There are one or more directories named with the ID for each layer in a full image. Each of these directories contains 3 files:
- `VERSION` - The schema version of the `json` file
- `json` - The JSON metadata for an image layer
- `layer.tar` - The Tar archive of the filesystem changeset for an image layer.

The content of the `VERSION` files is simply the semantic version of the JSON metadata schema:

1.0

And the `repositories` file is another JSON file which describes names/tags:

{
  "busybox": {
    "latest": "5785b62b697b99a5af6cd5d0aabc804d5748abbb6d3d07da5d1d3795f2dcc83e"
  }
}
Every key in this object is the name of a repository, and maps to a collection of tag suffixes. Each tag maps to the ID of the image represented by that tag.
So, as shown in the quote from the spec above, what we need to do is to create a set of directories -- one for each layer -- named after the layer's ID. Each of these layer-specific directories must contain 3 files:
- `layer.tar`
- `json` (with no file extension)
- `VERSION` (whose contents is literally just `1.0`)

There are many tools that you can use to spit out a tar with the above format, such as docker save. But that's the format you need to get.
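As a quick way to see that layout for yourself (the image name here is just an example), you can save an image locally and list the archive contents:

```
docker save busybox -o busybox.tar
tar -tf busybox.tar    # one directory per layer (VERSION, json, layer.tar) plus top-level metadata files
```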
Once you have your Docker image stored on disk in a format matching the above specification, you can transfer it to another machine (e.g. with rsync) and use docker image load to import the image.
Upvotes: 0
Reputation: 4933
Run
docker images
to see a list of the images on the host. Let's say you have an image called awesomesauce. In your terminal, cd
to the directory where you want to export the image. Now run:
docker save awesomesauce:latest > awesomesauce.tar
Copy the tar file to a thumb drive or whatever, and then copy it to the new host computer.
Now from the new host do:
docker load < awesomesauce.tar
Upvotes: 70
Reputation: 1204
For those who use Windows with WSL and Docker Desktop, I recommend a very simple solution.
On your host machine:
1. Stop Docker
2. In a command prompt, type:
wsl --shutdown
wsl --export docker-desktop-data E:\docker-desktop\docker-desktop-data.tar
Now you can copy docker-desktop-data.tar to your external storage (for example, an external HDD), then copy the file to the destination machine.
On your destination machine:
1. Stop Docker
2. In a command prompt, type:
wsl --shutdown
wsl --unregister docker-desktop-data
wsl --import docker-desktop-data E:\docker-desktop\data E:\docker-desktop\docker-desktop-data.tar --version 2
At this step, you may hit an error about being unable to create a network; just re-run the import command.
Now start Docker.
Hope it helps ;)
Upvotes: 1
Reputation: 43606
You will need to save the Docker image as a tar file:
docker save -o <path for generated tar file> <image name>
Then copy your image to a new system with regular file transfer tools such as cp
, scp
, or rsync
(preferred for big files). After that you will have to load the image into Docker:
docker load -i <path to image tar file>
You should add a filename (not just a directory) with -o, for example:
docker save -o c:/myfile.tar centos:16
Your image may need the repository prefix (the :latest tag is the default):
docker save -o C:\path\to\file.tar repository/imagename
PS: You may need to sudo
all commands.
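Putting the pieces together, a sketch of the full transfer (image name, path and host here are hypothetical):

```
docker save -o myimage.tar myrepo/myimage:latest      # save the image to a tar archive
scp myimage.tar user@remotehost:/tmp/                 # copy it to the other machine
ssh user@remotehost docker load -i /tmp/myimage.tar   # load it into Docker there
```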
Upvotes: 3849
Reputation: 35453
Transferring a Docker image via SSH, bzipping the content on the fly:
docker save <image> | bzip2 | ssh user@host docker load
Note that docker load
automatically decompresses images for you. It supports gzip
, bzip2
and xz
.
It's also a good idea to put pv
in the middle of the pipe to see how the transfer is going:
docker save <image> | bzip2 | pv | ssh user@host docker load
(More info about pv
: home page, man page).
Use gzip/gunzip when your network is fast and you can upload at 10 Mb/s or more: bzip2 won't be able to compress fast enough, and gzip will be much faster. (Thanks @Thomas Steinbach)
Use xz if you're on a really slow network (e.g. mobile internet): xz offers a higher compression ratio. (Thanks @jgmjgm)
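For reference, the same pipeline with those alternative compressors (docker load decompresses all of them transparently, as noted above):

```
docker save <image> | gzip | ssh user@host docker load   # fast link: favour throughput
docker save <image> | xz | ssh user@host docker load     # slow link: favour compression ratio
```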
Upvotes: 946
Reputation: 1266
The best way to save all the images is like this:
docker save $(docker images --format '{{.Repository}}:{{.Tag}}') -o allimages.tar
The above command will save all the images in allimages.tar. To load the images, go to the directory where you saved them and run this command:
docker load -i allimages.tar
Just make sure to run these commands in PowerShell and not in Command Prompt.
Upvotes: 18
Reputation: 2257
The fastest way to save and load a Docker image is through the gzip command:
docker save <image_id> | gzip > image_file.tgz
To load your zipped image on another server, use this command directly; it will be recognized as a compressed image:
docker load -i image_file.tgz
To rename or re-tag the image, use:
docker image tag <image_id> <image_path_name>:<version>
for example:
docker image tag 4444444 your_docker_or_harbor_path/ubuntu:14.0
Upvotes: 42
Reputation: 354
Based on @kolypto's answer, this worked great for me, but only with sudo for docker load:
docker save <image> | bzip2 | pv | ssh user@host sudo docker load
Or, if you don't have / don't want to install pv:
docker save <image> | bzip2 | ssh user@host sudo docker load
No need to manually zip or similar.
Upvotes: 5
Reputation: 8233
If you are working on a Windows machine and uploading to a Linux machine, commands such as
docker save <image> | ssh user@host docker load
will not work if you are using PowerShell, as it seems to add an additional character to the output. If you run the command using cmd (Command Prompt), however, it will work. As a side note, you can also install gzip using Chocolatey, and the following will also work from cmd:
docker save <image> | gzip | ssh user@host docker load
Upvotes: 6
Reputation: 834
REAL WORLD EXAMPLE
#host1
systemctl stop docker
systemctl start docker
#commit the container to a new image (-p pauses the container during the commit)
docker commit -p 1d09068ef111 ubuntu001_bkp3
#create backup
docker save -o ubuntu001_bkp3.tar ubuntu001_bkp3
#upload ubuntu001_bkp3.tar to my online drive
aws s3 cp ubuntu001_bkp3.tar s3://mybucket001/
#host2
systemctl stop docker
systemctl start docker
cd /dir1
#download ubuntu001_bkp3.tar from my online drive
aws s3 cp s3://mybucket001/ubuntu001_bkp3.tar /dir1
#restore backup
cat ./ubuntu001_bkp3.tar | docker load
docker run --name ubuntu001 -it ubuntu001_bkp3:latest bash
docker ps -a
docker attach ubuntu001
Upvotes: 14
Reputation: 3321
1. Pull an image or a repository from a registry.
docker pull [OPTIONS] NAME[:TAG|@DIGEST]
2. Save it as a .tar file.
docker save [OPTIONS] IMAGE [IMAGE...]
For example:
docker pull hello-world
docker save -o hello-world.tar hello-world
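On the other machine, the corresponding load step would simply be (a sketch, using the file produced above):

```
docker load -i hello-world.tar
```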
Upvotes: 4
Reputation: 11065
First save the Docker image to a compressed archive:
docker save <docker image name> | gzip > <docker image name>.tar.gz
Then load the exported image to Docker using the below command:
zcat <docker image name>.tar.gz | docker load
Upvotes: 78
Reputation: 6809
To save an image to any file path or shared NFS location, see the following example.
Get the image name or ID by running:
docker images
Say you have an image named "matrix-data".
Save the image:
docker save -o /home/matrix/matrix-data.tar matrix-data
Copy the image file from that path to any host. Then import it into the local Docker installation there using:
docker load -i <path to copied image file>
Upvotes: 175
Reputation: 4983
You can use a one-liner with the DOCKER_HOST variable:
docker save app:1.0 | gzip | DOCKER_HOST=ssh://user@remotehost docker load
Upvotes: 109
Reputation: 438
Scripts to perform the Docker save and load functions (tried and tested):
Docker Save:
#!/bin/bash
#files will be saved in the dir 'Docker_images'
mkdir Docker_images
cd Docker_images
directory=`pwd`
c=0
#save the image names in 'list.txt' (skip the header line of `docker images`)
docker images | awk 'NR>1 {print $1}' > list.txt
printf "START \n"
input="$directory/list.txt"
#Check and create the image tar for the docker images
while IFS= read -r line
do
   one=`echo $line | awk '{print $1}'`
   two=`echo $line | awk '{print $1}' | cut -c 1-3`
   if [ "$one" != "<none>" ]; then
      c=$((c+1))
      printf "\n $one \n $two \n"
      docker save -o $two$c'.tar' $one
      printf "Docker image number $c successfully converted: $two$c \n \n"
   fi
done < "$input"
Docker Load:
#!/bin/bash
cd Docker_images/
directory=`pwd`
ls | grep tar > files.txt
c=0
printf "START \n"
input="$directory/files.txt"
while IFS= read -r line
do
c=$((c+1))
printf "$c) $line \n"
docker load -i $line
printf "$c) Successfully created the Docker image $line \n \n"
done < "$input"
Upvotes: 2
Reputation: 3706
docker-push-ssh
is a command line utility I created just for this scenario.
It sets up a temporary private Docker registry on the server, establishes an SSH tunnel from your localhost, pushes your image, then cleans up after itself.
The benefit of this approach over docker save
(at the time of writing most answers are using this method) is that only the new layers are pushed to the server, resulting in a MUCH quicker upload.
Oftentimes, using an intermediate registry like Docker Hub is undesirable and cumbersome.
https://github.com/brthor/docker-push-ssh
Install:
pip install docker-push-ssh
Example:
docker-push-ssh -i ~/my_ssh_key username@myserver.com my-docker-image
The biggest caveat is that you have to manually add your localhost to Docker's insecure_registries
configuration. Run the tool once and it will give you an informative error:
Error Pushing Image: Ensure localhost:5000 is added to your insecure registries.
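For reference, on a Linux host that setting normally lives in /etc/docker/daemon.json; a minimal sketch (this overwrites any existing daemon.json, so merge by hand if you already have one; Docker Desktop exposes the same option in its settings UI instead):

```
# Add localhost:5000 to the daemon's insecure registries, then restart the daemon
echo '{ "insecure-registries": ["localhost:5000"] }' | sudo tee /etc/docker/daemon.json
sudo systemctl restart docker
```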
More details (OS X): Where should I set the '--insecure-registry' flag on Mac OS? (https://stackoverflow.com/questions/32808215/where-to-set-the-insecure-registry-flag-on-mac-os)
Upvotes: 26
Reputation: 422
When using docker-machine, you can copy images between machines mach1
and mach2
with:
docker $(docker-machine config <mach1>) save <image> | docker $(docker-machine config <mach2>) load
And of course you can also stick pv in the middle to get a progress indicator:
docker $(docker-machine config <mach1>) save <image> | pv | docker $(docker-machine config <mach2>) load
You may also omit one of the docker-machine config
sub-shells, to use your current default docker-host.
docker save <image> | docker $(docker-machine config <mach>) load
to copy image from current docker-host to mach
or
docker $(docker-machine config <mach>) save <image> | docker load
to copy from mach
to current docker-host.
Upvotes: 18
Reputation: 10006
To transfer images from your local Docker installation to a minikube VM:
docker save <image> | (eval $(minikube docker-env) && docker load)
Upvotes: 12
Reputation: 434
All other answers are very helpful. I just went through the same problem and figured out an easy way with docker-machine scp.
Since Docker Machine v0.3.0, scp was introduced to copy files from one Docker machine to another. This is very convenient if you want to copy a file from your local computer to a remote Docker machine such as AWS EC2 or DigitalOcean, because Docker Machine takes care of the SSH credentials for you.
Save your images using docker save, like:
docker save -o docker-images.tar app-web
Copy the images using docker-machine scp:
docker-machine scp ./docker-images.tar remote-machine:/home/ubuntu
This assumes your remote Docker machine is remote-machine and that the directory you want the tar file in is /home/ubuntu.
Load the Docker image
docker-machine ssh remote-machine sudo docker load -i docker-images.tar
Upvotes: 9
Reputation: 17418
I assume you need to save couchdb-cartridge
which has an image id of 7ebc8510bc2c:
stratos@Dev-PC:~$ docker images
REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE
couchdb-cartridge latest 7ebc8510bc2c 17 hours ago 1.102 GB
192.168.57.30:5042/couchdb-cartridge latest 7ebc8510bc2c 17 hours ago 1.102 GB
ubuntu 14.04 53bf7a53e890 3 days ago 221.3 MB
Save the archiveName image to a tar file. I will use the /media/sf_docker_vm/ directory to save the image.
stratos@Dev-PC:~$ docker save imageID > /media/sf_docker_vm/archiveName.tar
Copy the archiveName.tar file to your new Docker instance using whatever method works in your environment, for example FTP
, SCP
, etc.
Run the docker load
command on your new Docker instance and specify the location of the image tar file.
stratos@Dev-PC:~$ docker load < /media/sf_docker_vm/archiveName.tar
Finally, run the docker images
command to check that the image is now available.
stratos@Dev-PC:~$ docker images
REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE
couchdb-cartridge latest 7ebc8510bc2c 17 hours ago 1.102 GB
192.168.57.30:5042/couchdb-cartridge latest bc8510bc2c 17 hours ago 1.102 GB
ubuntu 14.04 4d2eab1c0b9a 3 days ago 221.3 MB
Upvotes: 16
Reputation: 635
I want to move all images with tags.
```
OUT=$(docker images --format '{{.Repository}}:{{.Tag}}')
OUTPUT=($OUT)
docker save $(echo "${OUTPUT[*]}") -o /dir/images.tar
```
Explanation:
First, OUT gets all tags, separated by newlines. Second, OUTPUT puts all tags into an array. Third, $(echo "${OUTPUT[*]}") passes all the tags to a single docker save command so that all images end up in a single tar.
Additionally, the archive can be compressed with gzip (see the sketch below for the source-side step). On the target, run:
tar xvf images.tar.gz -O | docker load
The -O option to tar writes the extracted contents to standard output, where docker load can read them.
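A sketch of the corresponding compression step on the source host, assuming the images.tar produced above is wrapped in a gzipped tar (which is what the tar ... -O pipeline on the target expects); note that docker load can also read a plainly gzip-compressed archive on its own:

```
tar czf images.tar.gz -C /dir images.tar   # wrap the save archive so `tar xvf images.tar.gz -O` emits it again
# Simpler alternative: compress in place and let docker load decompress it
gzip /dir/images.tar                       # produces /dir/images.tar.gz
docker load -i /dir/images.tar.gz          # run this on the target instead of the tar -O pipeline
```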
Upvotes: 4
Reputation: 478
You may use sshfs
:
$ sshfs user@ip:/<remote-path> <local-mount-path>
$ docker save <image-id> > <local-mount-path>/myImage.tar
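A sketch of the remaining step on the remote host, using the same placeholder paths as above:

```
# On the remote host, import the archive written through the sshfs mount
$ docker load -i <remote-path>/myImage.tar
```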
Upvotes: 3
Reputation: 1013
For a flattened export of a container's filesystem, use:
docker export CONTAINER_ID > my_container.tar
Use cat my_container.tar | docker import -
to import said image.
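Note that, because the export is flattened, image metadata such as the ENTRYPOINT and CMD are not preserved. To give the imported image a name and tag (the name here is just an example), docker import accepts an optional repository:tag argument:

```
cat my_container.tar | docker import - my_container_image:latest
```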
Upvotes: 30