Vincent

Reputation: 5425

"no space left on device" even after removing all containers

While experimenting with Docker and Docker Compose, I suddenly ran into "no space left on device" errors. I've tried to remove everything using the methods suggested in similar questions, but to no avail.

Things I ran:

$ docker-compose rm -v

$ docker volume rm $(docker volume ls -qf dangling=true)

$ docker rmi $(docker images | grep "^<none>" | awk '{print $3}')

$ docker system prune

$ docker container prune

$ docker rm $(docker stop -t=1 $(docker ps -q))

$ docker rmi -f $(docker images -q)

As far as I'm aware, there really shouldn't be anything left now. And it looks that way:

$ docker images    
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE

Same for volumes:

$ docker volume ls
DRIVER              VOLUME NAME

And for containers:

$ docker container ls   
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES

Unfortunately, I still get errors like this one:

$ docker-compose up
Pulling adminer (adminer:latest)...
latest: Pulling from library/adminer
90f4dba627d6: Pulling fs layer
19ae35d04742: Pulling fs layer
6d34c9ec1436: Download complete
729ea35b870d: Waiting
bb4802913059: Waiting
51f40f34172f: Waiting
8c152ed10b66: Waiting
8578cddcaa07: Waiting
e68a921e4706: Waiting
c88c5cb37765: Waiting
7e3078f18512: Waiting
42c465c756f0: Waiting
0236c7f70fcb: Waiting
6c063322fbb8: Waiting
ERROR: open /var/lib/docker/tmp/GetImageBlob865563210: no space left on device

Some data about my Docker installation:

$ docker info
Containers: 0
 Running: 0
 Paused: 0
 Stopped: 0
Images: 1
Server Version: 17.06.1-ce
Storage Driver: aufs
 Root Dir: /var/lib/docker/aufs
 Backing Filesystem: extfs
 Dirs: 15
 Dirperm1 Supported: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: bridge host macvlan null overlay
 Log: awslogs fluentd gcplogs gelf journald json-file logentries splunk syslog
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 6e23458c129b551d5c9871e5174f6b1b7f6d1170
runc version: 810190ceaa507aa2727d7ae6f4790c76ec150bd2
init version: 949e6fa
Security Options:
 apparmor
 seccomp
  Profile: default
Kernel Version: 4.10.0-32-generic
Operating System: Ubuntu 16.04.3 LTS
OSType: linux
Architecture: x86_64
CPUs: 4
Total Memory: 7.685GiB
Name: engelbert
ID: UO4E:FFNC:2V25:PNAA:S23T:7WBT:XLY7:O3KU:VBNV:WBSB:G4RS:SNBH
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Experimental: false
Insecure Registries:
 127.0.0.0/8
Live Restore Enabled: false

WARNING: No swap limit support

And my disk info:

$ df -h
Filesystem      Size  Used Avail Use% Mounted on
udev            3,9G     0  3,9G   0% /dev
tmpfs           787M   10M  778M   2% /run
/dev/nvme0n1p3   33G   25G  6,3G  80% /
tmpfs           3,9G   46M  3,8G   2% /dev/shm
tmpfs           5,0M  4,0K  5,0M   1% /run/lock
tmpfs           3,9G     0  3,9G   0% /sys/fs/cgroup
/dev/loop0       81M   81M     0 100% /snap/core/2462
/dev/loop1       80M   80M     0 100% /snap/core/2312
/dev/nvme0n1p1  596M   51M  546M   9% /boot/efi
/dev/nvme0n1p5  184G   52G  123G  30% /home
tmpfs           787M   12K  787M   1% /run/user/121
tmpfs           787M   24K  787M   1% /run/user/1000

And:

$ df -hi /var/lib/docker
Filesystem     Inodes IUsed IFree IUse% Mounted on
/dev/nvme0n1p3   2,1M  2,0M   68K   97% /

As I said, I'm still experimenting, so I'm not sure if I've posted all the relevant info - let me know if you need more.

Does anyone have any idea what else could be the issue?

Upvotes: 21

Views: 27113

Answers (3)

LoW

Reputation: 604

This may not directly answer the question, but it can be useful in general if the Dockerfile used to create the image is available.

In particular, make sure to limit the number of layers that will be generated. When writing the Dockerfile, avoid doing this:

RUN apt-get update && apt-get install -y package1
RUN apt-get update && apt-get install -y package2
RUN apt-get update && apt-get install -y package3

and do this instead:

RUN apt-get update && apt-get install -y \
    package1 \
    package2 \
    package3

Doing so drastically reduced the size of the image as well as the inode usage, since fewer layers are generated. This helped address the issue in my case (where the inodes would all get used up).
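As a rough way to verify the effect, docker history lists the layers of an image, one per line, so you can compare the layer count before and after restructuring the Dockerfile (my-image:latest here is just a placeholder for your own image tag):

$ docker history my-image:latest

Fewer RUN instructions should show up directly as fewer lines in this output.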

Also make sure to remove any intermediate image generated by a failed build to free up space: docker rmi <IMAGE_ID>.
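If you don't know the image ID, dangling images left behind by failed builds can also be listed and removed in one go (docker image prune has been available since Docker 1.13):

$ docker images -f dangling=true
$ docker image prune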

For more tips, see one of the many guides on optimizing Docker images.

Upvotes: 1

Tarang Srivastava

Reputation: 116

For future reference: once you have removed all the containers, you can also try docker system prune, which removes stopped containers, unused networks, and dangling images.
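If you want to see where the space is going first, docker system df prints a per-category summary of Docker's disk usage, and the -a flag makes the prune also remove unused (not merely dangling) images:

$ docker system df
$ docker system prune -a

Both commands have been available since Docker 1.13, so they work on the 17.06 installation from the question.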

Upvotes: 2

Rob Lockwood-Blake

Reputation: 5056

The problem is that /var/lib/docker is on the / filesystem, which is running out of inodes. You can check this by running df -i /var/lib/docker.

Since /home's filesystem has sufficient inodes and disk space, moving Docker's working directory there should get it going again.
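You can confirm that on your own machine with the same inode check against the target filesystem:

$ df -i /home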

(Note that this assumes there is nothing valuable in the current Docker install.)

First, stop the Docker daemon. On Ubuntu, run:

sudo service docker stop

Then move the old /var/lib/docker out of the way:

sudo mv /var/lib/docker /var/lib/docker~

Now create a directory on /home:

sudo mkdir /home/docker

and set the required permissions:

sudo chmod 0711 /home/docker

Link the /var/lib/docker directory to the new working directory:

sudo ln -s /home/docker /var/lib/docker

Then restart the Docker daemon:

sudo service docker start

Then it should work again.
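As an alternative to the symlink, you can point the daemon at the new location explicitly. This is a sketch assuming Docker 17.05 or later, where the data-root option replaced the older graph key; put the following in /etc/docker/daemon.json:

{
    "data-root": "/home/docker"
}

Then restart the daemon with sudo service docker restart and check docker info: the Docker Root Dir line should now read /home/docker.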

Upvotes: 42
