J.Main

Reputation: 311

No space left on device when pulling docker image from AWS

I am pulling a variety of Docker images from AWS, but the pull keeps failing on the final image with the following error:

ERROR: for <container-name>  failed to register layer: Error processing tar file(exit status 1): symlink libasprintf.so.0.0.0 /usr/lib64/libasprintf.so: no space left on device
ERROR: failed to register layer: Error processing tar file(exit status 1): symlink libasprintf.so.0.0.0 /usr/lib64/libasprintf.so: no space left on device

Does anyone know how to fix this problem?

I have tried stopping Docker, removing /var/lib/docker, and starting it back up again, but it fails at the same place.

Result of df -h:

Filesystem      Size  Used Avail Use% Mounted on
/dev/nvme0n1p1  8.0G  6.5G  1.6G  81% /
devtmpfs        3.7G     0  3.7G   0% /dev
tmpfs           3.7G     0  3.7G   0% /dev/shm
tmpfs           3.7G   17M  3.7G   1% /run
tmpfs           3.7G     0  3.7G   0% /sys/fs/cgroup
tmpfs           753M     0  753M   0% /run/user/0
tmpfs           753M     0  753M   0% /run/user/1000

Upvotes: 9

Views: 11503

Answers (3)

Nitin Nain

Reputation: 5483

It might be that old Docker images, volumes, etc. are still taking up space on your EBS volume. From the Docker docs:

Docker takes a conservative approach to cleaning up unused objects (often referred to as “garbage collection”), such as images, containers, volumes, and networks: these objects are generally not removed unless you explicitly ask Docker to do so. This can cause Docker to use extra disk space.

SSH into your EC2 instance and verify that the space is actually taken up:

ssh ec2-user@<public-ip>
df -h
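
To confirm that it is Docker's data taking up the space, docker system df shows Docker's own accounting of images, containers, and local volumes, and a quick du on the data directory (assuming the default data-root of /var/lib/docker) shows what it occupies on disk:

docker system df               # space used by images, containers, and local volumes
sudo du -sh /var/lib/docker    # total on-disk size of Docker's data directory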

Then you can prune the old images out:

docker system prune

Read the warning message from this command!
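
If the default prune doesn't reclaim enough, docker image prune -a goes further and removes every image not used by at least one container (docker system prune on its own only removes dangling images); it prints a similar warning, so read that one too:

docker image prune -a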

You can also prune the volumes. Only do this if you're not storing files locally (which you shouldn't be anyway; they should be in something like AWS S3).

Use with Caution:

docker system prune --volumes
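
If you'd rather see what would be affected before pruning, you can list the local volumes first; the dangling=true filter shows only volumes not referenced by any container:

docker volume ls                     # all local volumes
docker volume ls -f dangling=true    # volumes not referenced by any container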

Upvotes: 5

Daniel Smith

Reputation: 1034

I wrote an article about this after struggling with the same issue. If you have deployed successfully before, you may just need to add some maintenance to your deploy process. In my case, I just added a cronjob to run the following:

docker ps -q --filter "status=exited" | xargs --no-run-if-empty docker rm;
docker volume ls -qf dangling=true | xargs -r docker volume rm;
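
For reference, a crontab entry running this cleanup once a day could look something like the sketch below (the 03:00 schedule is just an illustrative choice; it needs to run as a user that can talk to the Docker daemon, e.g. root):

# daily at 03:00: remove exited containers, then dangling volumes
0 3 * * * docker ps -q --filter "status=exited" | xargs --no-run-if-empty docker rm; docker volume ls -qf dangling=true | xargs -r docker volume rm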

https://medium.com/@_ifnull/aws-ecs-no-space-left-on-device-ce00461bb3cb

Upvotes: 2

J.Main

Reputation: 311

The issue was that the EC2 instance did not have enough EBS storage assigned to it. Following these steps will fix it:

  • Navigate to the EC2 console
  • Look at the details of your instance and locate the root device and block devices
  • Click the device path and select the EBS ID
  • Click Actions in the Volumes panel
  • Select Modify Volume
  • Enter the desired volume size (the default is 8 GB; you shouldn't need much more)
  • SSH into the instance
  • Run lsblk to see the available volumes and note their sizes
  • Run sudo growpart /dev/volumename 1 on the volume you want to resize
  • Run sudo xfs_growfs /dev/volumename (the one with / in the MOUNTPOINT column of lsblk); an example session is sketched below
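
To make the last three steps concrete, here is a sketch of the session on an instance like the one in the question, assuming the root disk is /dev/nvme0n1 with partition /dev/nvme0n1p1 mounted at / (device names vary by instance type):

lsblk                           # the disk (nvme0n1) should now show the new size
sudo growpart /dev/nvme0n1 1    # grow partition 1 to fill the enlarged volume
sudo xfs_growfs -d /            # grow the XFS filesystem mounted at /
df -h                           # verify the extra space is available

(For an ext4 root filesystem, sudo resize2fs /dev/nvme0n1p1 would replace the xfs_growfs step.)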

Upvotes: 11
