nos

Reputation: 229098

docker container has too large a file descriptor limit (ulimit -n)

I'm trying to figure out why a container shows a very large limit on open file descriptors. On the host:

bld@nos 14:27:20 0 ~/dev/ (master)
$ ulimit -Hn
4096
bld@nos 14:27:32 0 ~/dev/ (master)
$ ulimit -n
4096

The root user on the host has these limits:

# ulimit -Hn
524288
# ulimit -n
1024

Running a centos7 image that was just built:

bld@nos 14:27:35 0 ~/dev/ (master)
$ docker run --rm -ti  -e  -v  -v  -v  bld:centos7 /bin/bash
[root@6d912cda1731 stingasrc]# ulimit -n
1073741816

Running as a non-root user with docker run -u "$(id -u):$(id -g)" --rm -ti -e -v -v -v bld:centos7 /bin/bash shows the same limit.

The main issue is that processes within the container spawn new threads that iterate up to the maximum file descriptor and close() each one - which takes ... a while, for over a billion descriptors.

While I'm aware that the --ulimit flag can be passed to docker run, I'd like to know:

How and why does docker v20.10.14 apply a ulimit -n of 1073741816 when running this container - and is there a system-wide setting for this?

Upvotes: 3

Views: 5346

Answers (3)

rrauenza

Reputation: 6973

You can also override nofile in systemd. The default in the systemd unit files for docker on some distros sets it to infinity, which ends up being 1073741816.

Make this file:

$ cat /etc/systemd/system/containerd.service.d/override.conf
[Service]
LimitNOFILE=10240

It wouldn't hurt to also add the same drop-in under docker.service.d.

This drop-in overrides the default systemd unit file, merging your change into it and limiting the number of file descriptors.
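A minimal sketch of applying the drop-in (assuming containerd and docker are both managed by systemd on this host; the expected 10240 follows from the override above):

$ sudo systemctl daemon-reload
$ sudo systemctl restart containerd docker
$ docker run --rm bld:centos7 /bin/bash -c 'ulimit -n'
10240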

Upvotes: 0

akalenyu

Reputation: 21

Big thanks for opening this. I was stracing my own issue and this pointed me in the right direction.

My dnsmasq container was doing exactly what the question describes (spawning processes that iterate over the maximum file descriptors and close() them, which takes a while for over a billion descriptors): https://lists.thekelleys.org.uk/pipermail/dnsmasq-discuss/2020q1/013821.html

To overcome this I created:

# cat /etc/docker/daemon.json 
{
    "default-ulimits": {
        "nofile": {
            "Name": "nofile",
            "Hard": 1024,
            "Soft": 1024
        },
        "nproc": {
            "Name": "nproc",
            "Soft": 65536,
            "Hard": 65536
        }
    }
}

and ran sudo systemctl restart docker for the changes to take effect.
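A quick check that the new daemon-wide defaults apply (a sketch using the bld:centos7 image from the question; with the 1024/1024 setting above, both values should come back as 1024):

$ docker run --rm bld:centos7 /bin/bash -c 'ulimit -Sn; ulimit -Hn'
1024
1024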

Upvotes: 2

araisch

Reputation: 1940

Docker and the Linux kernel have a long history of tweaking things around all this cgroup and limits stuff.

The first nofile limits are set by the kernel while init or systemd is starting up.

Then the Docker engine kicks in and has systemd override the system settings. If you check /usr/lib/systemd/system/docker.service and LimitNOFILE is set to infinity, that can cause problems. There have been commits changing the value back and forth between reasonable values and infinity. I don't know why, but sometimes it is hard to provide easy handling and suit all use cases at once.

The docker run option --ulimit <type>=<soft>:<hard> sets the limits for that container (for file descriptors, type = nofile). The same defaults can be set daemon-wide via default-ulimits in /etc/docker/daemon.json, and the per-container flag overrides them.

So imho, for people going deep into this stuff (and you obviously are one of them), best practice is to set all those limits in docker run or docker-compose.yml to values that make sense for your use case, as sketched below.
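For example, a sketch of both approaches; the image name is taken from the question and the 1024:4096 values are only placeholders to adjust for your use case:

$ docker run --rm --ulimit nofile=1024:4096 bld:centos7 /bin/bash -c 'ulimit -Sn; ulimit -Hn'
1024
4096

The docker-compose.yml equivalent:

services:
  bld:
    image: bld:centos7
    ulimits:
      nofile:
        soft: 1024
        hard: 4096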

(All from memory; I may check tomorrow on Linux whether something is wrong.)

Edit (forgot to answer your question): Newer kernels set nofile to 1073741816, maybe CentOS 7 as well. I think (but don't know) that the limit settings on the host are not handed over to the container; the engine just sets them for the user running the container on the host. So try to set nofile separately in your container as well (e.g. edit limits.conf).

Upvotes: 0
