Reputation: 2700
I have a container started with docker-compose (file format version 2) that has a memory limit of 32MB.
Whenever I run the container I can monitor the used resources like so:
docker stats 02bbab9ae853
It shows the following:
CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
02bbab9ae853 client-web_postgres-client-web_1_e4513764c3e7 0.07% 8.078MiB / 32MiB 25.24% 5.59MB / 4.4MB 135GB / 23.7MB 0
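For repeated checks I also grab just that column from a script. This is only a minimal sketch; the --no-stream and --format flags with the .MemUsage template field are standard docker CLI options, and the container ID is the one shown above:

from subprocess import check_output

def docker_mem_usage(container_id):
    # One snapshot of the "MEM USAGE / LIMIT" column for a single container.
    out = check_output(
        ["docker", "stats", "--no-stream", "--format", "{{.MemUsage}}", container_id]
    )
    return out.decode().strip()  # e.g. "8.078MiB / 32MiB"

print(docker_mem_usage("02bbab9ae853"))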
What looks really weird to me is the memory part:
8.078MiB / 32MiB 25.24%
If, outside the container, I list the Postgres PIDs, I get:
$ pgrep postgres
23051, 24744, 24745, 24746, 24747, 24748, 24749, 24753, 24761
If I stop the container and re-run the above command, I get no PIDs. That is clear proof that all of those PIDs were created by the (now stopped) container.
Now, if I re-run the container, take every PID, calculate its RSS memory usage, and sum it all up with a Python method, I don't get the ~8MiB Docker is reporting but a much higher value, nowhere near it (around 100MB or so).
This is the Python method I'm using to sum the RSS memory:
from subprocess import check_output

import psutil


def get_process_memory(name):
    """Sum the RSS of every process whose name matches `name` (via pgrep)."""
    total = 0.0
    try:
        for pid in map(int, check_output(["pgrep", name]).split()):
            total += psutil.Process(pid).memory_info().rss
    except Exception:
        pass
    return total
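I call it roughly like this (the MiB conversion is just for readability):

print(get_process_memory("postgres") / (1024 * 1024))  # total RSS in MiB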
Does anybody know why the memory reported by Docker is so different?
This is, of course, a problem for me, because the memory limit doesn't appear to be respected.
I'm using a Raspberry Pi.
Upvotes: 2
Views: 1805
Reputation: 21
That's because Docker is reporting only RSS from cgroups memory.stats, but you actually need to sum up cache, rss and swap (https://www.kernel.org/doc/Documentation/cgroup-v1/memory.txt). More info about that in https://sysrq.tech/posts/docker-misleading-containers-memory-usage/
Upvotes: 2