Reputation: 11
I am hosting some simple Docker containers. I have noticed that the container size is increasing quickly over time, and I cannot figure out where the growth comes from.
Size reported by Docker:
me@somewhere:~$ sudo docker ps -s
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES SIZE
02b30add1cb3 my-service "npm start" 23 hours ago Up 23 hours 3001/tcp, 0.0.0.0:9017->9017/tcp my-service-frontend 0 B (virtual 776.4 MB)
20a2be4931e7 my-service "phantomjs src/sites/" 23 hours ago Up 23 hours 0.0.0.0:3007->3001/tcp my-service-5 6.144 kB (virtual 776.4 MB)
ba340ba08941 my-service "phantomjs src/sites/" 23 hours ago Up 23 hours 0.0.0.0:3006->3001/tcp my-service-4 6.144 kB (virtual 776.4 MB)
7b5411d8a171 my-service "phantomjs src/sites/" 23 hours ago Up 23 hours 0.0.0.0:3003->3001/tcp my-service-1 6.144 kB (virtual 776.4 MB)
b583a544b37d my-service "phantomjs src/sites/" 23 hours ago Up 23 hours 0.0.0.0:3001->3001/tcp my-service-0 6.144 kB (virtual 776.4 MB)
91373086e06e foo_bar "/bin/sh -c 'git pull" 47 hours ago Up 47 hours 0.0.0.0:12776->8080/tcp kickass_murdock 11.26 MB (virtual 1.081 GB)
Size reported by du on the host:
me@somewhere:~$ sudo du -h -d 1 /var/lib/docker/containers
14G /var/lib/docker/containers/20a2be4931e7a10b2e29260b541e3c4d6581462650e47d59682f84626843752b
1,6G /var/lib/docker/containers/7b5411d8a171a35a3c937d62dbdea141fc0a9f3c4de25a2da3a0b94ea71a8f3d
9,6M /var/lib/docker/containers/02b30add1cb3ba6d5be1c36b2c9dd141d8d70cb88a021d2363af5684ef3c220f
480K /var/lib/docker/containers/91373086e06ea83269465e0b026cfe7ca0158a1315b0df04da9a1d1b4ee52823
13G /var/lib/docker/containers/b583a544b37db6144f17a4819ca2f636126b11d668caab3dcdbf4c3a33dedc65
13G /var/lib/docker/containers/ba340ba08941d47af45230be328ef7289c19b6bb6a0d120cf2098cbdd9983f65
40G /var/lib/docker/containers
Size reported by du inside a container (similar output for all other containers):
me@somewhere:~$ sudo docker exec -it my-service-4 du -h -d1 -c /
58M /root
0 /dev
3.0M /etc
706M /usr
1.4M /tmp
14M /var
9.0M /bin
32M /lib
4.0K /home
8.0K /run
4.0K /mnt
4.0K /boot
0 /sys
4.0K /opt
4.0K /srv
4.0K /lib64
3.9M /sbin
du: cannot access '/proc/12642/task/12642/fd/3': No such file or directory
du: cannot access '/proc/12642/task/12642/fdinfo/3': No such file or directory
du: cannot access '/proc/12642/fd/3': No such file or directory
du: cannot access '/proc/12642/fdinfo/3': No such file or directory
0 /proc
4.0K /media
825M /
825M total
So: both du inside the container and docker ps report disk usage below 1 GB, yet the actual size on disk is more than 10 GB (at least for some containers). Can anybody tell me what is happening? I suspect something is going wrong inside my containers, but I do not know where to look right now. What should I do?
Upvotes: 1
Views: 2135
Reputation: 1156
Docker puts one layer on top of another in its union file system whenever changes are stored to an image. When you delete a container with docker rm -f CONTAINERID, you should see that /var/lib/docker/containers uses less space.
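For example, removing one of the large containers and re-checking the directory should make the difference visible (the ID below is just one of those from your du output):

sudo docker rm -f 20a2be4931e7
sudo du -h -d 1 /var/lib/docker/containers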
Changes to an image are stored when you build it. If you just use an image, i.e. run a container, the changed data is only held in the container's writable layer. So you should investigate what happens IN your container, especially what your PhantomJS processes write to the filesystem. ncdu is a good tool for that.
Start with ncdu, and then store the files that are produced in a folder mounted from your host's filesystem: docker run -it -v FULLPATHONHOST:FULLPATHWITHINCONTAINER IMAGE
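A minimal sketch of that, assuming your service writes its output under /data inside the container; /srv/my-service-data is just a placeholder path on the host:

docker run -it -v /srv/my-service-data:/data my-service
# everything written to /data now ends up on the host and no longer grows the container layer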
Test
Run a simple container containing nothing more than the OS (in my case Alpine):
docker run -it stk/alpine:base sh
On the host, go to /var/lib/docker/aufs/diff/ and list the directories it contains with ncdu. (Of course you can use any other program you like to determine the directory sizes.)
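Concretely, that amounts to something like the following (this assumes the aufs storage driver; on newer Docker installations the layers live under /var/lib/docker/overlay2 instead):

cd /var/lib/docker/aufs/diff
ncdu .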
ncdu 1.10 ~ Use the arrow keys to navigate, press ? for help
--- /var/lib/docker/aufs/diff --------------------------------------------------
/..
286,4MiB [##########] /39f3e2ea0dfe17366b8cd7b0...bf3681b99a1081e33ad62a509f28
214,3MiB [####### ] /2de39307b9361cae12f0116e...d28056e4699b21b9a4d34f374461
207,7MiB [####### ] /92ec6d044cb3e39ae0050012...78d0591675f2231daafbf0877778
154,0MiB [##### ] /9f3806e6bedc8fb01929131b...e01aa1980aadba914fdd9d2f96ae
149,5MiB [##### ] /5f0ca2331640639507d85b83...693659438367311abb0c792b8a62
136,8MiB [#### ] /902b87aaaec929e805414868...1f529ad7f37ab300d4ef9f3a0dbf
136,2MiB [#### ] /222ba86561913d299deb9e0e...6b5f5ec117b01386a4156d092687
132,6MiB [#### ] /8b3a9a9eeaf8ed59f24f21a2...dfe8d033890a2fa44b445deb2e3c
128,5MiB [#### ] /72b3edf317a8d682466c1500...a5e2cad31c8305ed42c41cd61149
117,0MiB [#### ] /818e3763e72ef82b28b0552e...b9f163dc601d266e94e46fd26bb0
57,4MiB [## ] /eeffdfafed9f60771b5bf87a...e8bbd16b572f77899c8e689d174d
56,1MiB [# ] /6976ce3ed5fab37382d90467...37578332417ffcf35a1d499eba52
51,3MiB [# ] /a5a6e0549d247f1c8b81a350...c5071f46d17afe2f8988817360b3
Total disk usage: 2,7GiB Apparent size: 2,7GiB Items: 168658
Within the container execute something like
tr -dc A-Za-z0-9 </dev/urandom | head -c 409600000 > a.txt && ls a.txt -all -h
That will create a file with random data called a.txt. I chose the size 409600000 to be greater than 286,4MiB, the largest folder in /var/lib/docker/aufs/diff/, so that ncdu will show it at the top.
ncdu 1.10 ~ Use the arrow keys to navigate, press ? for help
--- /var/lib/docker/aufs/diff --------------------------------------------------
/..
390,6MiB [##########] /0720a07653a57d938c861cf3...e61c81c29f12289759f0560aa38f
286,4MiB [####### ] /39f3e2ea0dfe17366b8cd7b0...bf3681b99a1081e33ad62a509f28
214,3MiB [##### ] /2de39307b9361cae12f0116e...d28056e4699b21b9a4d34f374461
207,7MiB [##### ] /92ec6d044cb3e39ae0050012...78d0591675f2231daafbf0877778
154,0MiB [### ] /9f3806e6bedc8fb01929131b...e01aa1980aadba914fdd9d2f96ae
149,5MiB [### ] /5f0ca2331640639507d85b83...693659438367311abb0c792b8a62
136,8MiB [### ] /902b87aaaec929e805414868...1f529ad7f37ab300d4ef9f3a0dbf
136,2MiB [### ] /222ba86561913d299deb9e0e...6b5f5ec117b01386a4156d092687
132,6MiB [### ] /8b3a9a9eeaf8ed59f24f21a2...dfe8d033890a2fa44b445deb2e3c
128,5MiB [### ] /72b3edf317a8d682466c1500...a5e2cad31c8305ed42c41cd61149
117,0MiB [## ] /818e3763e72ef82b28b0552e...b9f163dc601d266e94e46fd26bb0
57,4MiB [# ] /eeffdfafed9f60771b5bf87a...e8bbd16b572f77899c8e689d174d
56,1MiB [# ] /6976ce3ed5fab37382d90467...37578332417ffcf35a1d499eba52
Total disk usage: 3,0GiB Apparent size: 3,0GiB Items: 168678
Now I know that the directory starting with 0720a07653a57d9... is the one I have to look at. Go into it and list its contents:
root@T520:/var/lib/docker/aufs/diff# cd 0720a07653a57d938c861cf32f4bee87fa4be61c81c29f12289759f0560aa38f
root@T520:/var/lib/docker/aufs/diff/0720a07653a57d938c861cf32f4bee87fa4be61c81c29f12289759f0560aa38f# ls -all -h
total 391M
drwxr-xr-x 5 root root 4,0K Feb 23 10:55 .
drwxr-xr-x 674 root root 80K Feb 23 10:55 ..
-rw-r--r-- 1 root root 391M Feb 23 10:57 a.txt
drwx------ 2 root root 4,0K Feb 23 10:55 root
-r--r--r-- 1 root root 0 Feb 23 10:55 .wh..wh.aufs
drwx------ 2 root root 4,0K Feb 23 10:55 .wh..wh.orph
drwx------ 2 root root 4,0K Feb 23 10:55 .wh..wh.plnk
As you can see, the file a.txt is listed there.
Now rerun the procedure: create the random file again and refresh the ncdu listing (just hit r in ncdu). Both ncdu and ls should show you that the directory size did not change. So data within the Docker filesystem is simply overwritten in place. If you choose a smaller size, the directory gets smaller.
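A quick way to verify that, sticking with the same a.txt as above but a smaller size:

# overwrite a.txt with ~100 MB of random data, then check its size again
tr -dc A-Za-z0-9 </dev/urandom | head -c 100000000 > a.txt
ls -all -h a.txt

Refreshing ncdu on the host afterwards should show the layer directory shrinking accordingly.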
So how might this help you? As shown above, there is no growth when data is merely changed within existing files. And you can find out which directory contains your container's filesystem and inspect the plain file structure of files added or changed within your container. Hope this helps you find the offending files.
If you exit your container and start it again with the same docker run command, a new instance is created, with its own filesystem layer.
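You can see this for yourself; running the same image twice leaves two separate containers behind, each with its own layer:

docker run -it stk/alpine:base sh   # first instance
docker run -it stk/alpine:base sh   # second, independent instance with its own layer
docker ps -a                        # both containers are listed
# docker start -ai CONTAINERID      # would reuse an existing container instead of creating a new one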
You can find the IDs of your stopped containers with
docker ps -a | grep Exited | grep stk/alpine:base | awk '{print $1 }'
To see what would be found before deleting anything:
docker ps -a | grep Exited | grep stk/alpine:base
7b7d3f6e857a stk/alpine:base "sh" 22 minutes ago Exited (0) 2 minutes ago gigantic_swartz
2f51ea988a28 stk/alpine:base "sh" 23 minutes ago Exited (0) 22 minutes ago cranky_euler
4bfafbb034fe stk/alpine:base "sh" 34 minutes ago Exited (0) 25 minutes ago sick_williams
80cd5687fcd7 stk/alpine:base "sh" 44 minutes ago Exited (137) 37 minutes ago determined_panini
a2179a8dd543 stk/alpine:base "sh" 58 minutes ago Exited (130) 44 minutes ago agitated_shockley
8596cd310292 stk/alpine:base "sh" 3 days ago Exited (137) 3 days ago dreamy_murdock
33db61a7830b stk/alpine:base "sh" 3 days ago Exited (0) 3 days ago desperate_hodgkin
2f96c15dc8a1 stk/alpine:base "sh" 2 weeks ago Exited (0) 2 weeks ago determined_babbage
Append | xargs -r docker rm to delete them.
One-line solution:
docker ps -a | grep Exited | grep stk/alpine:base | awk '{print $1 }' | xargs -r docker rm
Docker will check that an image is not still referenced by containers or by other images and complain that it cannot be removed when you run docker rmi. But in this case you want the containers, not the images, to be deleted, so use docker rm instead of docker rmi (I updated the answer accordingly).
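To illustrate the difference (the exact error text varies by Docker version):

docker rmi stk/alpine:base   # refused with a conflict error while stopped containers still reference the image
docker ps -a | grep Exited | grep stk/alpine:base | awk '{print $1 }' | xargs -r docker rm
docker rmi stk/alpine:base   # now succeeds, if you actually want the image gone as well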
Enjoy
Upvotes: 1