Reputation: 5544
After some crashes with a Docker container whose mem_limit was too low, how can I check the mem_limit from inside the container? I want to print an error message on startup and exit if the mem_limit is set too low.
Upvotes: 32
Views: 40155
Reputation: 405
You have to check all values along the path given in /proc/self/cgroup
(example: /sys/fs/cgroup/memory/user.slice/user-1501.slice/session-99.scope
) up to /sys/fs/cgroup/memory
and look for the minimum. Here is the script:
#!/bin/bash

function memory_limit {
    [ -r /proc/self/cgroup ] || { echo >&2 "Cannot read /proc/self/cgroup"; return 1; }
    path=$(grep -Poh "memory:\K.*" /proc/self/cgroup)
    [ -n "$path" ] || { echo >&2 "Cannot get memory constraints from /proc/self/cgroup"; return 1; }
    full_path="/sys/fs/cgroup/memory${path}"
    cd "$full_path" || { echo >&2 "cd $full_path failed"; return 1; }
    [ -r memory.limit_in_bytes ] || { echo >&2 "Cannot read 'memory.limit_in_bytes' at $(pwd)"; return 1; }
    min=$(cat memory.limit_in_bytes)
    while [[ $(pwd) != /sys/fs/cgroup/memory ]]; do
        cd .. || { echo >&2 "cd .. failed in $(pwd)"; return 1; }
        [ -r memory.limit_in_bytes ] || { echo >&2 "Cannot read 'memory.limit_in_bytes' at $(pwd)"; return 1; }
        val=$(cat memory.limit_in_bytes)
        (( val < min )) && min=$val
    done
    echo "$min"
}
memory_limit
21474836480
In my situation, I have
cat /proc/self/cgroup
3:memory:/user.slice/user-1501.slice/session-99.scope
cat /sys/fs/cgroup/memory/user.slice/user-1501.slice/session-99.scope/memory.limit_in_bytes
9223372036854771712
cat /sys/fs/cgroup/memory/user.slice/user-1501.slice/memory.limit_in_bytes
21474836480 <= actual limit
cat /sys/fs/cgroup/memory/user.slice/memory.limit_in_bytes
9223372036854771712
cat /sys/fs/cgroup/memory/memory.limit_in_bytes
9223372036854771712
Thanks to Mandragor for the original idea.
Upvotes: 3
Reputation: 4577
Previously /sys/fs/cgroup/memory/memory.limit_in_bytes
worked for me, but on my Ubuntu with kernel 5.8.0-53-generic
it seems the correct path is now /sys/fs/cgroup/memory.max
to read the memory limit from inside the container.
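Which of the two files exists depends on whether the kernel mounted cgroup v1 or v2. A small sketch that probes both locations; the optional root argument is only there so the function can be exercised against a test directory instead of the real /sys/fs/cgroup:

```shell
#!/bin/bash
# Print the effective memory limit, trying cgroup v2 first, then v1.
# Prints "max" (v2) or a very large number (v1) when no limit is set.
memory_limit() {
    local root="${1:-/sys/fs/cgroup}"
    if [ -r "$root/memory.max" ]; then                      # cgroup v2
        cat "$root/memory.max"
    elif [ -r "$root/memory/memory.limit_in_bytes" ]; then  # cgroup v1
        cat "$root/memory/memory.limit_in_bytes"
    else
        echo "cannot determine cgroup memory limit" >&2
        return 1
    fi
}
```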
Upvotes: 9
Reputation: 70319
On the host you can run docker stats
to get a top-like monitor of your running containers. The output looks like:
$ docker stats
CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
729e4e0db0a9 dev 0.30% 2.876GiB / 3.855GiB 74.63% 25.3MB / 4.23MB 287kB / 16.4kB 77
This is how I discovered that docker run --memory 4096m richardbronosky/node_build_box npm run install
was not getting 4G of memory, because Docker was configured to limit memory to 2G. (In the example above this has been corrected.) Without that insight I was totally lost as to why my process was ending with simply "Killed".
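For use in scripts, docker stats also accepts --no-stream (one snapshot instead of a live view) and --format with a Go template. A sketch that pulls out the limit column; the sample line is hard-coded here so the parsing runs without a Docker daemon:

```shell
# On a real host you would capture the line with:
#   docker stats --no-stream --format '{{.Name}} {{.MemUsage}}'
sample='dev 2.876GiB / 3.855GiB'

# MemUsage prints "usage / limit"; the limit is the 4th whitespace field
limit=$(echo "$sample" | awk '{print $4}')
echo "$limit"    # -> 3.855GiB
```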
Upvotes: 20
Reputation: 5544
This worked for me in the container; thanks for the ideas, Sebastian.
#!/bin/bash

function memory_limit {
    awk -F: '/^[0-9]+:memory:/ {
        filepath="/sys/fs/cgroup/memory"$3"/memory.limit_in_bytes";
        getline line < filepath;
        print line
    }' /proc/self/cgroup
}

# -lt compares numerically; inside [[ ]], "<" would compare as strings
if [[ $(memory_limit) -lt 419430400 ]]; then
    echo "Memory limit was set too small. Minimum 400m."
    exit 1
fi
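Splitting the threshold comparison into its own function makes the check testable without any cgroup files; require_memory below is an illustrative helper, not part of the original script:

```shell
#!/bin/bash
# require_memory LIMIT MIN - fail when LIMIT (bytes) is below MIN (bytes)
require_memory() {
    if [ "$1" -lt "$2" ]; then
        echo "Memory limit was set too small. Minimum $2 bytes." >&2
        return 1
    fi
}

# e.g. at container startup:
#   require_memory "$(memory_limit)" 419430400 || exit 1
```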
Upvotes: 9
Reputation: 17413
The memory limit is enforced via cgroups. Therefore you need to use cgget
to find out the memory limit of the given cgroup.
To test this you can run a container with a memory limit:
docker run --memory 512m --rm -it ubuntu bash
Run this within your container:
apt-get update
apt-get install cgroup-bin
cgget -n --values-only --variable memory.limit_in_bytes /
# will report 536870912
Docker 1.13 mounts the container's cgroup to /sys/fs/cgroup
(this could change in future versions). You can check the limit using:
cat /sys/fs/cgroup/memory/memory.limit_in_bytes
Upvotes: 39