qichao_he

Reputation: 5424

Is it safe to clean docker/overlay2/

I have some Docker containers running on AWS EC2, and the /var/lib/docker/overlay2 folder grows very fast in disk size.

I'm wondering if it is safe to delete its contents, or if Docker has some kind of command to free up disk space.


UPDATE:

I actually tried docker system prune -a already, which reclaimed 0 KB.

Also, my /var/lib/docker/overlay2 disk usage is much larger than what docker system df reports.
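
For reference, these are roughly the two numbers I was comparing (assuming the default Docker data root):

docker system df                       # Docker's own accounting of images, containers and volumes
sudo du -sh /var/lib/docker/overlay2   # actual on-disk size of the overlay2 directory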

After reading docker documentation and BMitch's answer, I believe it is a stupid idea to touch this folder and I will try other ways to reclaim my disk space.

Upvotes: 458

Views: 570524

Answers (27)

Landaida

Reputation: 120

I had the same issue on Debian 12: after 3 or 4 months, my 1 TB of space had shrunk to 50 GB free. Docker stores data related to containers, images, and the image cache in the overlay2 directory. The script below preserves all of the directories still linked to those objects and removes the orphaned ones. To actually remove the directories, uncomment the "rm" command. I hope it helps someone.

#!/bin/bash

# Define the overlay2 directory
overlay_dir="/var/lib/docker/overlay2"

# Verify that the overlay2 directory exists
if [ ! -d "$overlay_dir" ]; then
  echo "The directory $overlay_dir does not exist. Please check the path."
  exit 1
fi

# Get all layer IDs associated with current containers (MergedDir, LowerDir, UpperDir, WorkDir)
container_layer_ids=$(docker ps -qa | xargs docker inspect --format '{{ .GraphDriver.Data.MergedDir }} {{ .GraphDriver.Data.LowerDir }} {{ .GraphDriver.Data.UpperDir }} {{ .GraphDriver.Data.WorkDir }}' | tr ' ' '\n' | tr ':' '\n' | awk -F'/' '{print $(NF-1)}' | sort | uniq)

# Get all layer IDs associated with images
image_layer_ids=$(docker images -qa | xargs docker inspect --format '{{ .GraphDriver.Data.MergedDir }} {{ .GraphDriver.Data.LowerDir }} {{ .GraphDriver.Data.UpperDir }} {{ .GraphDriver.Data.WorkDir }}' | tr ' ' '\n' | tr ':' '\n' | awk -F'/' '{print $(NF-1)}' | sort | uniq)

# Get all cache IDs of type source.local
source_local_cache_ids=$(docker system df -v | grep 'source.local' | awk '{print $1}' | sort | uniq)

# Combine the layer IDs of containers and images (source.local cache IDs are handled separately below)
all_layer_ids=$(echo -e "$container_layer_ids\n$image_layer_ids" | sort | uniq)

# Verify if the retrieval of layer IDs was successful
if [ -z "$all_layer_ids" ]; then
  echo "Error: Could not retrieve the directories of MergedDir, LowerDir, UpperDir, WorkDir, or source.local caches."
  echo "Aborting to avoid accidental deletion of directories."
  exit 1
fi

echo "source_local_cache_ids:"
echo "$source_local_cache_ids"

echo "all_layer_ids:"
echo "$all_layer_ids"

# List all subdirectories in overlay2
overlay_subdirs=$(ls -1 $overlay_dir)

# Find and remove orphan directories that are not in the list of active layers or caches
echo "Searching for and removing orphan directories in $overlay_dir..."

for dir in $overlay_subdirs; do
  # Ignore directories ending in "-init" and the "l" directory
  if [[ "$dir" == *"-init" ]] || [[ "$dir" == "l" ]]; then
    echo "Ignoring special directory: $overlay_dir/$dir"
    continue
  fi

  # Check if the directory name starts with any of the source.local cache IDs
  preserve_dir=false
  for cache_id in $source_local_cache_ids; do
    if [[ "$dir" == "$cache_id"* ]]; then
      preserve_dir=true
      break
    fi
  done

  # If directory should be preserved, skip it
  if $preserve_dir; then
    echo "Preserving cache directory: $overlay_dir/$dir"
    continue
  fi

  # Check if the directory is associated with an active container or image
  if ! echo "$all_layer_ids" | grep -q "$dir"; then
    echo "Removing orphan directory: $overlay_dir/$dir"
    # rm -rf "$overlay_dir/$dir"
  fi
done

echo "Process completed."

Upvotes: 1

Viet Pm

Reputation: 317

I read the other comments and tested things myself, and I found some solutions to clean up the overlay2 folder while keeping your app data, provided you store your data in volumes (not in containers). However, ALWAYS BACK UP FOR SAFETY.

Solution 1: just use the basic docker prune command. Make sure your Docker containers are running, then run this command to clean all unused containers, images, etc.:

docker system prune -a

Solution 2: use docker compose down. I found that I usually stop my Docker app with the docker compose stop command. When I use the down command instead, a lot of disk space in the overlay2 folder is released.

docker compose down
docker compose up

Solution 3: restart the Docker service.

sudo systemctl restart docker

Solution 4: directly delete the contents of the overlay2 folder. You'll need to delete your containers and images too, because they're linked to the overlay2 folder. Make sure you keep your old volumes so you don't lose your app data and your app can run smoothly after you start Docker again. Follow all the steps below.

# Step 1: down your docker compose to delete your containers and images
docker compose down --rmi all

# Step 2: delete other unused containers, images ...
docker system prune -a

# Step 3: delete overlay2 folder in /var/lib/docker/overlay2. 
# Make sure you have permissions to view and delete this folder 
# In my case, I need to switch to root user with "sudo su" command
sudo su
cd /var/lib/docker
rm -rf overlay2
mkdir overlay2

# Step 4: restart your docker service
sudo systemctl restart docker

# Step 5: move to your app folder, rebuild your images (if needed) and up again
docker compose up --build

Upvotes: 8

SickProdigy

Reputation: 308

No, it's not. There are safe ways to work around it, though.

First you should make sure logs aren't taking up the space.

truncate -s 0 /var/lib/docker/containers/*/*-json.log

To reduce log output you can add this to your compose file:

<service_name>:
    logging:
        options:
            max-size: "20m"
            max-file: "5"

Or add this to /etc/docker/daemon.json

{
  "log-opts": {
    "max-size": "20m",
    "max-file": "5"
  }
}

Now that you've prevented logs from over-accumulating, you can search a bit further with this command

du -s /var/lib/docker/overlay2/*/diff | sort -n -r

This will list the diff folders of the overlay2 directory, largest first.

From here you get overlay2/HASH/diff

You can cross-reference the hash against your images with this command:

docker image inspect $(docker image ls -q)  --format '{{ .GraphDriver.Data.MergedDir}} -> {{.RepoDigests}}' | sed 's|/merged||g'

Stopping the container, pruning, and then restarting the container in question will fix a lot of space issues.

Hope this helps someone.

Upvotes: 4

Kazamaa

Reputation: 99

I did clean the /overlay2 folder. It messed up my system at first, but a simple sudo systemctl restart docker solved the issue.

Upvotes: 2

Todd Hammer

Reputation: 311

In my case, my disk was filling up and I found that the /tmp directory in my webserver container was full of old PNG files. Apparently, these are not being unlinked by the application once uploaded.

My solution was far simpler than everything here. I just went into the container:

docker exec -it {container_name} bash

and deleted /tmp/*.png, which released a ton of disk space. Sometimes the solution is simple, and it's almost always a tmp directory filling up.
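
For anyone who wants to do the same without an interactive shell, something like this should work (the container name and file pattern are placeholders from my case):

docker exec {container_name} sh -c 'rm -f /tmp/*.png'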

Upvotes: 1

lambodar

Reputation: 3763

For me, pruning images and volumes didn't work.

First, check the disk space usage. You can rely on ncdu, the best disk-space CLI utility I have come across. It displays the space occupied per directory and has many useful built-in controls for managing the filesystem. This will give you a fair idea of which particular directory occupies the most space.

sudo ncdu -x /var 

Coming back to Docker: once you are sure that Docker is what's taking up the most disk space, you can try pruning, and if pruning doesn't clean enough, try clearing dangling volumes using the command below. This will not delete any container or any volume in use.

docker volume rm $(docker volume ls -qf dangling=true)

Also, as a standard practice, limit container logs. By default, Docker stores container logs indefinitely. You can limit the amount of disk space used by container logs by setting a limit in the Docker daemon configuration file (/etc/docker/daemon.json). For example, you could add the following to limit container logs to 50 MB. If no daemon.json file is present, you can create one.

{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "50m"
  }
}

After making changes to the daemon configuration file, you'll need to restart the Docker daemon with the sudo service docker restart command.

Upvotes: 6

mstgnz

Reputation: 3888

The reason the overlay2 directory fills up is that one of the containers keeps accumulating log records. At least that was the case for my Docker. Stopping and starting Docker solves this, but it's a temporary solution: you will either have to limit the logs or restart the containers every day.

You can see how much space each container directory occupies with the following command.

du -sh /var/lib/docker/containers/*
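
If a single container turns out to be the culprit, one option is a per-container limit at run time (a sketch; the image name is a placeholder and the default json-file log driver is assumed):

docker run -d --log-opt max-size=10m --log-opt max-file=3 my-image:latest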

Upvotes: 0

huyhuyt

Reputation: 21

In my case, running systemctl stop docker and then systemctl start docker somehow automatically freed space under /var/lib/docker/*.

Upvotes: 2

Mike

Reputation: 86

I navigated to the folder containing overlay2. Using du -shc overlay2/*, I found that there was 25G of junk in overlay2. Running docker system prune -af said Total Reclaimed Space: 1.687MB, so I thought it had failed to clean it up. However, I then ran du -shc overlay2/* again only to see that overlay2 had only 80K in it, so it did work.

Be careful, docker lies :).

Upvotes: 5

mheyman

Reputation: 4325

Based on Mert Mertce's answer I wrote the following script complete with spinners and progress bars.

Since writing the script, however, I noticed that the extra directories on our build servers are transient; that is, Docker appears to be cleaning up, albeit slowly. I don't know whether Docker gets upset if there is contention for removing directories. Our current solution is to use docuum with a lot of extra overhead (150+ GB).

#!/bin/bash
[[ $(id -u) -eq 0 ]] || exec sudo /bin/bash -c "$(printf '%q ' "$BASH_SOURCE" "$@")"
progname=$(basename $0)
quiet=false
no_dry_run=false
while getopts ":qn" opt
do
    case "$opt" in
      q)
          quiet=true
          ;;
      n)
          no_dry_run=true
          ;;
      ?)
          echo "unexpected option ${opt}"
          echo "usage: ${progname} [-q|--quiet]"
          echo "    -q: no output"
          echo "    -n: no dry run (will remove unused directories)"
          exit 1
          ;;
    esac
done
shift "$(($OPTIND -1))"

[[ ${quiet} = false ]] || exec /bin/bash -c "$(printf '%q ' "$BASH_SOURCE" "$@")" > /dev/null

echo "Running as: $(id -un)"

progress_bar() {
    local w=80 p=$1;  shift
    # create a string of spaces, then change them to dots
    printf -v dots "%*s" "$(( $p*$w/100 ))" ""; dots=${dots// /.};
    # print those dots on a fixed-width space plus the percentage etc.
    printf "\r\e[K|%-*s| %3d %% %s" "$w" "$dots" "$p" "$*";
}

cd /var/lib/docker/overlay2
echo cleaning in ${PWD}
i=1
spi=1
sp="/-\|"
directories=( $(find . -mindepth 1 -maxdepth 1 -type d | cut -d/ -f2) )
images=( $(docker image ls --all --format "{{.ID}}") )
total=$((${#directories[@]} * ${#images[@]}))
used=()
for d in "${directories[@]}"
do
    for id in ${images[@]}
    do
        ((++i))
        progress_bar "$(( ${i} * 100 / ${total}))" "scanning for used directories ${sp:spi++%${#sp}:1} "
        # check grep's exit status directly; a bare [ $? ] is always true
        if docker inspect "$id" | grep -q "$d"
        then
            used+=("$d")
            i=$(( $i + $(( ${#images[@]} - $(( $i % ${#images[@]} )) )) ))
            break
        fi
    done
done
echo -e "\b\b " # get rid of spinner
i=1
used=($(printf '%s\n' "${used[@]}" | sort -u))
unused=( $(find . -mindepth 1 -maxdepth 1 -type d | cut -d/ -f2) )
for d in "${used[@]}"
do
    ((++i))
    progress_bar "$(( ${i} * 100 / ${#used[@]}))" "scanning for unused directories ${sp:spi++%${#sp}:1} "
    for uni in "${!unused[@]}"
    do
        if [[ ${unused[uni]} = $d ]]
        then
            unset 'unused[uni]'
            break;
        fi
    done
done
echo -e "\b\b " # get rid of spinner
if [ ${#unused[@]} -gt 0 ]
then
    [[ ${no_dry_run} = true ]] || echo "Could remove:  (to automatically remove, use the -n, "'"'"no-dry-run"'"'" flag)"
    for d in "${unused[@]}"
    do
        if [[ ${no_dry_run} = true ]]
        then
            echo "Removing $(realpath ${d})"
            rm -rf ${d}
        else
            echo " $(realpath ${d})"
        fi
    done
    echo Done
else
    echo "All directories are used, nothing to clean up."
fi

Upvotes: 2

Maybe this folder is not your problem; don't rely on the output of df -h with Docker. Use the commands below to see the size of each of your folders:

echo; pwd; echo; ls -AlhF; echo; du -h --max-depth=1; echo; du -sh

Upvotes: 0

Mattias

Reputation: 1091

If your system is also used for building images, you might have a look at cleaning up the garbage created by the builders using:

docker buildx prune --all

and

docker builder prune --all
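
To check whether the build cache is actually what's eating the disk before pruning, docker system df reports it on its own line (on recent Docker versions):

docker system df    # the "Build Cache" row shows total and reclaimable builder space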

Upvotes: 99

Mert Mertce

Reputation: 1203

"Official" answer, cleaning with "prune" commands, does not clean actually garbage in overlay2 folder.

So, to answer the original question, what can be done is:

Disclaimer: Be careful when applying this. It may result in breaking your Docker objects!

  • List the folder names (hashes) in overlay2.
  • Inspect the Docker objects (images, containers, ...) that you need (a stopped container, or an image currently not used by any container, does not mean that you do not need it).
  • When you inspect, you will see that it gives you the hashes related to your object, including overlay2's folders.
  • Grep the inspect output for each of overlay2's folder names.
  • Note all folders that are found with grep.
  • Now you can delete the overlay2 folders that are not referenced by any Docker object you need.

Example:

Let's say these folders are inside your overlay2 directory:

a1b28095041cc0a5ded909a20fed6dbfbcc08e1968fa265bc6f3abcc835378b5
021500fad32558a613122070616963c6644c6a57b2e1ed61cb6c32787a86f048

And all you have is one image with ID c777cf06a6e3.

Then, do this:

docker inspect c777cf06a6e3 | grep a1b2809
docker inspect c777cf06a6e3 | grep 021500

Imagine that the first command found something whereas the second found nothing.

Then you can delete the 0215... folder of overlay2:

rm -r 021500fad32558a613122070616963c6644c6a57b2e1ed61cb6c32787a86f048

To answer the title of the question:

  • Yes, it is safe to directly delete an overlay2 folder if you find out that it is not in use.
  • No, it is not safe to delete it directly if you find out that it is in use or you are not sure.

Upvotes: 13

BMitch

Reputation: 263637

Docker uses /var/lib/docker to store your images, containers, and local named volumes. Deleting this can result in data loss and possibly stop the engine from running. The overlay2 subdirectory specifically contains the various filesystem layers for images and containers.

To clean up unused containers and images, see docker system prune. There are also options to remove volumes and even tagged images, but they aren't enabled by default due to the possibility of data loss:

$ docker system prune --help

Usage:  docker system prune [OPTIONS]

Remove unused data

Options:
  -a, --all             Remove all unused images not just dangling ones
      --filter filter   Provide filter values (e.g. 'label=<key>=<value>')
  -f, --force           Do not prompt for confirmation
      --volumes         Prune volumes

What a prune will never delete includes:

  • running containers (list them with docker ps)
  • logs on those containers (see this post for details on limiting the size of logs)
  • filesystem changes made by those containers (visible with docker diff)

Additionally, anything created outside of the normal docker folders may not be seen by docker during this garbage collection. This could be from some other app writing to this directory, or a previous configuration of the docker engine (e.g. switching from AUFS to overlay2, or possibly after enabling user namespaces).

What would happen if this advice is ignored and you deleted a single folder like overlay2 out from this filesystem? The container filesystems are assembled from a collection of filesystem layers, and the overlay2 folder is where docker is performing some of these mounts (you'll see them in the output of mount when a container is running). Deleting some of these when they are in use would delete chunks of the filesystem out from a running container, and likely break the ability to start a new container from an impacted image. See this question for one of many possible results.
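
For example, while a container is running you can see those overlay mounts (and the overlay2 paths backing them) with a read-only check like:

mount | grep overlay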


To completely refresh docker to a clean state, you can delete the entire directory, not just sub-directories like overlay2:

# danger, read the entire text around this code before running
# you will lose data
sudo -s
systemctl stop docker
rm -rf /var/lib/docker
systemctl start docker
exit

The engine will restart in a completely empty state, which means you will lose all:

  • images
  • containers
  • named volumes
  • user created networks
  • swarm state

Upvotes: 399

mhe

Reputation: 1

I recently had a similar issue: overlay2 grew bigger and bigger, but I couldn't figure out what consumed the bulk of the space.

df showed me that overlay2 was about 24GB in size.

With du I tried to figure out what occupied the space… and failed.

The difference came from the fact that deleted files (mostly log files in my case) were still being held open by a process (Docker). Thus the files don't show up with du, but the space they occupy shows up with df.
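
A quick way to confirm this situation (assuming lsof is installed) is to list files that have been unlinked but are still held open, and filter for the process you suspect (dockerd here):

sudo lsof +L1 | grep -i docker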

A reboot of the host machine helped. Restarting the docker container would probably have helped already… This article on linuxquestions.org helped me to figure that out.

Upvotes: 0

y.selivonchyk

Reputation: 9900

docker system prune -af && docker image prune -af

Upvotes: -2

uylmz

Reputation: 1552

I had the same problem. In my instance it was because the /var/lib/docker directory was mounted into a running container (in my case google/cadvisor), which blocked docker prune from cleaning the folder. Stopping the container, running docker prune, and then restarting the container solved the problem.
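
A one-liner sketch to spot which running container has /var/lib/docker bind-mounted (the output format is just for illustration):

docker inspect --format '{{ .Name }}: {{ range .Mounts }}{{ .Source }} {{ end }}' $(docker ps -q) | grep /var/lib/docker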

Upvotes: 1

AymDev

Reputation: 7539

Docker apparently keeps image layers of old versions of an image for running containers. It may happen if you update your running container's image (same tag) without stopping it, for example:

docker-compose pull
docker-compose up -d

Running docker-compose down before updating solved it; the downtime is not an issue in my case.
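
In other words, the sequence that avoided the leak for me looks like this (the same commands as above, just with the down first):

docker-compose down
docker-compose pull
docker-compose up -d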

Upvotes: 0

Tiago Barreto

Reputation: 189

Friends, to keep everything clean you can use these commands:

docker system prune -a && docker volume prune

Upvotes: 14

Amit Bondwal

Reputation: 161

Adding to the comments above, in which people suggest pruning the system (clearing dangling volumes, images, exited containers, etc.): sometimes your app is the culprit. It generates too many logs in a short time, and if you are using an empty-directory volume (a local volume) this fills up the /var partition. In that case I found the command below very useful for figuring out what is consuming space on my /var partition disk.

du -ahx /var/lib | sort -rh | head -n 30

This command lists the top 30 entries consuming the most space on a single disk. If you are using external storage with your containers, an unrestricted du takes a lot of time to run; this command does not descend into mounted volumes and is much faster. You will get the exact directories and files which are consuming space. You can then go into those directories and check which files are useful. If the files are required, you can move them to persistent storage by changing the app to use persistent storage for that location, or by changing the location of those files. The rest you can clear.

Upvotes: 6

Jason Hughes

Reputation: 3597

Everything in /var/lib/docker is container filesystems. If you stop all your containers and prune them, you should end up with the folder being empty. You probably don't really want that, so don't go randomly deleting stuff in there. Do not delete things in /var/lib/docker directly. You may get away with it sometimes, but it's inadvisable for many reasons.

Do this instead:

sudo bash
cd /var/lib/docker
find . -type f | xargs du -b  | sort -n

What you will see is the largest files shown at the bottom. If you want, figure out what containers those files are in, enter those containers with docker exec -ti containername -- /bin/sh and delete some files.

You can also put docker system prune -a -f on a daily/weekly cron job as long as you aren't leaving stopped containers and volumes around that you care about. It's better to figure out the reasons why it's growing, and correct them at the container level.
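
If you do go the cron route, one possible entry (a sketch; the schedule and log path are arbitrary):

# /etc/cron.d/docker-prune (hypothetical file): prune unused objects nightly at 03:00
0 3 * * * root docker system prune -a -f >> /var/log/docker-prune.log 2>&1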

Upvotes: 0

Shankar Thyagarajan

Reputation: 916

DON'T DO THIS IN PRODUCTION

The answer given by @ravi-luthra technically works but it has some issues!

In my case, I was just trying to recover disk space. The /var/lib/docker/overlay2 folder was taking 30 GB of space and I only run a few containers regularly. It looks like Docker has some issue with data leakage, and some temporary data is not cleared when a container stops.

So I went ahead and deleted all the contents of the overlay2 folder. After that, my Docker instance became unusable. When I tried to run or build any container, it gave me this error:

failed to create rwlayer: symlink ../04578d9f8e428b693174c6eb9a80111c907724cc22129761ce14a4c8cb4f1d7c/diff /var/lib/docker/overlay2/l/C3F33OLORAASNIYB3ZDATH2HJ7: no such file or directory

Then with some trial and error, I solved this issue by running

(WARNING: This will delete all your data inside docker volumes)

docker system prune --volumes -a

So it is not recommended to do such dirty cleanups unless you completely understand how the system works.

Upvotes: 3

mirekphd

Reputation: 6781

Background

The blame for the issue can be split between our misconfiguration of container volumes and a problem with Docker leaking (failing to release) temporary data written to these volumes. We should be mapping (either to host folders or other persistent storage claims) all of our containers' temporary / log / scratch folders where our apps write frequently and/or heavily. Docker does not take responsibility for the cleanup of all the automatically created so-called EmptyDirs located by default in /var/lib/docker/overlay2/*/diff/*. Contents of these "non-persistent" folders should be purged automatically by Docker after the container is stopped, but apparently are not (they may even be impossible to purge from the host side while the container is still running, and it can be running for months at a time).

Workaround

A workaround requires careful manual cleanup, and while already described elsewhere, you still may find some hints from my case study, which I tried to make as instructive and generalizable as possible.

So what happened is that the culprit app (in my case clair-scanner) managed to write, over a few months, hundreds of gigs of data to the /diff/tmp subfolder of Docker's overlay2:

du -sch /var/lib/docker/overlay2/<long random folder name seen as bloated in df -haT>/diff/tmp

271G total

So as all those subfolders in /diff/tmp were pretty self-explanatory (all were of the form clair-scanner-* and had obsolete creation dates), I stopped the associated container (docker stop clair) and carefully removed these obsolete subfolders from diff/tmp, starting prudently with a single (oldest) one, and testing the impact on docker engine (which did require restart [systemctl restart docker] to reclaim disk space):

rm -rf $(ls -at /var/lib/docker/overlay2/<long random folder name seen as bloated in df -haT>/diff/tmp | grep clair-scanner | tail -1)

I reclaimed hundreds of gigs of disk space without the need to re-install Docker or purge its entire folders. All running containers did have to be stopped at one point, because a Docker daemon restart was required to reclaim the disk space, so first make sure your failover containers are running correctly on other node(s). I wish, though, that the docker prune command could cover the obsolete /diff/tmp (or even /diff/*) data as well (via yet another switch).

It's a 3-year-old issue now, you can read its rich and colorful history on Docker forums, where a variant aimed at application logs of the above solution was proposed in 2019 and seems to have worked in several setups: https://forums.docker.com/t/some-way-to-clean-up-identify-contents-of-var-lib-docker-overlay/30604

Upvotes: 26

Sarke

Reputation: 3235

I found this worked best for me:

docker image prune --all

By default Docker will not remove named images, even if they are unused. This command will remove unused images.

Note each layer in an image is a folder inside the /var/lib/docker/overlay2/ folder.

Upvotes: 149

user2932688

Reputation: 1704

I also had problems with a rapidly growing overlay2.

/var/lib/docker/overlay2 is the folder where Docker stores the writable layers for your containers. docker system prune -a may only work if the container is stopped and removed.

In my case I was able to figure out what consumes space by going into overlay2 and investigating.

That folder contains other hash-named folders. Each of those has several subfolders, including a diff folder.

The diff folder contains the actual differences written by a container, with the exact folder structure of your container (at least it was so in my case - Ubuntu 18...).

So I used du -hsc /var/lib/docker/overlay2/LONGHASHHHHHHH/diff/tmp to figure out that /tmp inside my container is the folder which gets polluted.

As a workaround, I used the -v /tmp/container-data/tmp:/tmp parameter for the docker run command to map the inner /tmp folder to the host, and set up a cron job on the host to clean up that folder.
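
Spelled out as a full docker run invocation (the container and image names are placeholders):

# map the container's /tmp to a host folder so a host-side cron job can clean it up
docker run -d --name my-app -v /tmp/container-data/tmp:/tmp my-image:latest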

The cron task was simple:

  • sudo nano /etc/crontab
  • */30 * * * * root rm -rf /tmp/container-data/tmp/*
  • save and exit

NOTE: overlay2 is a system Docker folder, and its structure may change at any time. Everything above is based on what I saw in there. I had to go into the Docker folder structure only because the system was completely out of space and wouldn't even allow me to ssh into the Docker container.

Upvotes: 50

Tristan

Reputation: 797

I had this issue... It was the logs that were huge. Logs are here:

/var/lib/docker/containers/<container id>/<container id>-json.log

You can manage this in the run command line or in the compose file. See there : Configure logging drivers

I personally added these 3 lines to my docker-compose.yml file:

my_container:
  logging:
    options:
      max-size: 10m

Upvotes: 78

Ravi Luthra

Reputation: 205

WARNING: DO NOT USE IN A PRODUCTION SYSTEM

/# df
...
/dev/xvda1      51467016 39384516   9886300  80% /
...

Ok, let's first try system prune

/# docker system prune --volumes
...
/# df
...
/dev/xvda1      51467016 38613596  10657220  79% /
...

Not so great, seems like it cleaned up a few megabytes. Let's go crazy now:

/# sudo su
/# service docker stop
/# cd /var/lib/docker
/var/lib/docker# rm -rf *
/# service docker start
/var/lib/docker# df
...
/dev/xvda1      51467016 8086924  41183892  17% /
...

Nice! Just remember that this is NOT recommended in anything but a throw-away server. At this point Docker's internal database won't be able to find any of these overlays and it may cause unintended consequences.

Upvotes: 13
