Reputation: 24603
I'm running docker via CoreOS and AWS's ECS. I had a failing image that got restarted many times, and the containers are still around; they filled my drive partition. Specifically, /var/lib/docker/overlay/
contains a large number of files/directories.
I know that docker-cleanup-volumes is a thing, but it cleans the /volumes directory, not the /overlay directory.
docker ps -a
shows over 250 start attempts on my bad docker container. They aren't running, though.
Aside from rm -rf /var/lib/docker/overlay/*
, how can I/should I clean this up?
Upvotes: 123
Views: 216780
Reputation: 2456
From our side we used:
sudo docker system prune -a -f
which saved me about 3 GB!
We also used the famous commands:
sudo docker rm -v $(sudo docker ps -a -q -f status=exited)
sudo docker rmi -f $(sudo docker images -f "dangling=true" -q)
docker volume ls -qf dangling=true | xargs -r docker volume rm
We put those on a cron job to manage our disk space a little more efficiently (a sample crontab entry is sketched below).
Reference: https://forums.docker.com/t/some-way-to-clean-up-identify-contents-of-var-lib-docker-overlay/30604/4
Careful with that 3rd command (... xargs -r docker volume rm): make sure the containers that may use those volumes are running when you run it. Otherwise, their volumes will be seen as dangling and therefore deleted.
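As an illustration, a minimal crontab entry for a weekly prune; the schedule and log path here are just examples, not from the original post:
0 3 * * 0 /usr/bin/docker system prune -a -f >> /var/log/docker-prune.log 2>&1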
Upvotes: 219
Reputation: 21364
It's not real!
/var/lib/docker/overlay2, as the name suggests, contains a bunch of overlay file systems (good intro here). Run mount | grep overlay2 and you will see that all the /var/lib/docker/overlay2/*/merged folders are mounts of type overlay. This means that what du reports isn't real. To get a sense of how much space is actually being used on disk in those folders, you need to limit your attention to just the upper directory of each overlayfs mount (called diff in docker's case), e.g.:
du -sch /var/lib/docker/overlay2/*/diff
...
3.8G total
For comparison, in my case:
> du -sch /var/lib/docker/overlay2
...
17G total
Update
It seems that you can also simply use du's -x / --one-file-system option ("skip directories on different file systems") to only see the real part:
> du -schx /var/lib/docker/overlay2
Upvotes: 11
Reputation: 171
docker builder prune
This helps clean up the docker overlay2 folder.
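If you want to keep recent build cache, docker builder prune also accepts filters; a small sketch (the 24-hour window is just an example):
docker builder prune --all --filter until=24h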
Upvotes: 17
Reputation: 789
Please be aware that the docker prune commands do not clean the /var/lib/docker/overlay2 directory.
It is also not advised to remove only the overlay directory, as it may impact existing containers.
I have searched a lot of articles but couldn't find any solution to clean the overlay directory other than resetting the entire docker state:
# Please understand that this will restart the docker engine in a completely empty state
# i.e. you will lose all images, containers, volumes, networks, swarm state, etc.
# You can obviously first take the backup of the directories that you want to keep and copy the contents back after restarting docker service.
service docker stop
rm -rf /var/lib/docker
service docker start
The last command brings the docker service back up and recreates all the folders inside /var/lib/docker.
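A minimal sketch of the backup idea mentioned in the comments above, assuming the named-volume data under /var/lib/docker/volumes is what you want to keep and /backup is an illustrative directory on another disk (restored volumes may need to be re-created by name before containers can use them again):
service docker stop
cp -a /var/lib/docker/volumes /backup/docker-volumes
rm -rf /var/lib/docker
service docker start
# copy the saved volume data back under the freshly created data root
cp -a /backup/docker-volumes/. /var/lib/docker/volumes/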
Upvotes: 2
Reputation: 130
I followed these simple steps:
Step 1: df -h [check the disk usage, to confirm the space is being used by the overlay folder].
Step 2: sudo docker system prune [removes stopped containers, unused networks, dangling images, and the build cache].
Step 3: sudo docker image prune -a [removes any remaining images not used by a container, not just dangling ones].
Step 4: df -h [to confirm the overlay space has been freed].
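To look at the overlay directory specifically rather than the whole filesystem, something like this works (the -x flag keeps du from double-counting the overlay mounts, as noted in an earlier answer):
sudo du -shx /var/lib/docker/overlay2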
Upvotes: 2
Reputation: 4890
Here is a working option:
docker rm -f $(docker ps -a |awk 'NR>1&&/Exited/{print $1}')
Upvotes: 1
Reputation: 802
I have added this to my .bashrc in my dev environment and have gotten used to running it every day or so.
function cleanup_docker() {
docker ps -f status=exited -q | xargs -r docker rm
docker images -f dangling=true -q | xargs -r docker rmi
}
In some cases, the following script can free up more space, as it tries to remove every image and silently fails for those still in use:
function cleanup_docker_aggressive() {
for i in $(docker images --no-trunc -q | sort -u)
do
docker rmi $i 2> /dev/null
done
}
Sadly, they're not significantly cleaner than your solution.
EDIT: Starting with Docker 1.13, you can use docker system:
docker system df # to check what is using space
docker system prune # also cleans up networks, build cache, etc.
EDIT: Starting with Docker 17.09, you can also use docker container prune and docker image prune:
docker container prune
docker image prune -a
The latter can be used with fancy filters like --filter "until=24h".
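For example, to drop unused images older than a day (the duration is just an example):
docker image prune -a --force --filter "until=24h"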
Upvotes: 14
Reputation: 49
Here is a way to clean up the docker overlay directory, taken from https://lebkowski.name/docker-volumes/
# remove untagged images
docker images --no-trunc | grep '<none>' | awk '{ print $3 }' | xargs -r docker rmi
# remove dead and exited containers, along with their volumes
docker ps --filter status=dead --filter status=exited -aq | xargs -r docker rm -v
For Docker < 1.9, remove orphaned volume directories:
find '/var/lib/docker/volumes/' -mindepth 1 -maxdepth 1 -type d | grep -vFf <(docker ps -aq | xargs docker inspect | jq -r '.[]|.Mounts|.[]|.Name|select(.)') | xargs -r rm -fr
Or for Docker >= 1.9:
docker volume ls -qf dangling=true | xargs -r docker volume rm
Upvotes: 0
Reputation: 139
We just started having this problem, and btafarelo's answer got me part of the way, or at least made me feel better about removing the sha256 entries.
System info: ec2 instances running CoreOS 1.12 behind an ELB
Shut down docker:
systemctl stop docker
Remove the old overlay data:
rm -rf /var/lib/docker/overlay/*
Then execute the results of this command (it only echoes the rm commands, so review the output before running it; a sketch of running them in one go follows these steps):
for d in $(find /var/lib/docker/image/overlay -type d -name '*sha256*'); do echo rm -rf $d/* ; done
Reboot (easiest way to bring everything back up).
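If you're comfortable with what the echoed commands will delete, a sketch of running them directly (same find expression as above, without the echo):
for d in $(find /var/lib/docker/image/overlay -type d -name '*sha256*'); do rm -rf $d/* ; done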
This recovered about 25% of the disk after the services restarted, with no ill side effects.
Upvotes: 10
Reputation: 7468
Docker garbage collection can be done easily using another docker container: https://github.com/spotify/docker-gc
You can make it run as a cron job using https://github.com/flaccid/docker-docker-gc-crond
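For reference, the docker-gc README suggests running it as a one-off container roughly like this (check the project's README for the current invocation):
docker run --rm -v /var/run/docker.sock:/var/run/docker.sock -v /etc:/etc:ro spotify/docker-gc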
Upvotes: 1
Reputation: 2026
In short: combine docker ps --quiet --all --filter status=exited with docker rm, and docker images with docker rmi.
Your hacky way is fine.
docker rm `docker ps -a | grep Exited | awk '{print $1 }'`
My hacky way is
docker rm $(docker ps --all | awk '/ago/{print $1}')
A slightly cleaner way is to run docker ps with the --quiet (-q) flag to get just the id numbers and --filter status=exited to select just the exited ones:
docker rm $(docker ps --filter status=exited --quiet) # remove stopped docker processes
Or run docker rm with the --force (-f) flag and docker ps with the --all (-a) flag to remove even the running ones:
docker rm --force $(docker ps --all --quiet) # remove all docker processes
What's probably taking up all that disk space after several failed builds is the images. To conserve disk space on the docker host, periodically remove unused docker images with
docker rmi $(docker images --filter dangling=true --quiet) # clean dangling docker images
Or, to get more aggressive, you can --force (-f) it to clean up --all (-a) images:
docker rmi --force $(docker images --all --quiet) # clean all possible docker images
@analytik's way of putting it into a .bashrc function seems like a practical idea:
function cleanup_docker() {
docker rm --force $(docker ps --all --quiet) # remove all docker processes
docker rmi $(docker images --filter dangling=true --quiet) # clean dangling docker images
}
And if you're in the habit of generating lots of docker images that you don't need, add it to .bash_logout as well.
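A sketch of that, assuming cleanup_docker is the function defined above in your ~/.bashrc:
echo 'cleanup_docker' >> ~/.bash_logout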
Upvotes: 7
Reputation: 24603
Here's the hacky way I'm doing this right now. I'm not going to accept it as an answer because I'm hoping there's a better way.
# delete old docker processes
docker rm `docker ps -a | grep Exited | awk '{print $1 }'`
# ignore_errors: true (an Ansible-style directive; errors here are harmless if there is nothing to remove)
# delete old images. It will complain about still-in-use images; that's fine.
docker rmi `docker images -aq`
Upvotes: 15