Nicolas Landier

Reputation: 140

How to let a Python process use all Docker container memory without getting killed?

I have a Python process that does some heavy computations with Pandas and such. It's not my code, so I don't have much knowledge of it.

The situation is that this Python code used to run perfectly fine on a server with 8GB of RAM, maxing out all the available resources.

We moved this code to Kubernetes and we can't make it run: even after increasing the allocated resources to 40GB, the process is greedy and inevitably grows until it exceeds the container limit and gets killed by Kubernetes.

I know this code is probably suboptimal and needs rework on its own.

However, my question is: how can I get Docker on Kubernetes to mimic what Linux did on the server, that is, give the process as many resources as it needs without killing it?

Upvotes: 3

Views: 3676

Answers (2)

berkorbay

Reputation: 465

Using the --oom-kill-disable option together with a memory limit works for me (12GB memory) in a Docker container. Perhaps it applies to Kubernetes as well.

docker run -dp 80:8501 --oom-kill-disable -m 12g <image_name> 

Hence: How to mimic "--oom-kill-disable=true" in Kubernetes?

Upvotes: 0

caarlos0

Reputation: 20633

I found out that running something like this seems to work:

import os
import resource

# Read the container's memory limit from the cgroup (v1) filesystem and
# set it as both the soft and hard cap on the process's address space,
# so allocations fail with MemoryError instead of triggering the OOM killer.
if os.path.isfile('/sys/fs/cgroup/memory/memory.limit_in_bytes'):
    with open('/sys/fs/cgroup/memory/memory.limit_in_bytes') as limit:
        mem = int(limit.read())
        resource.setrlimit(resource.RLIMIT_AS, (mem, mem))

This reads the memory limit from the cgroup filesystem and sets it as both the soft and hard limit on the process's maximum address space (RLIMIT_AS).

You can test it by running something like:

docker run -it --rm -m 1G --cpus 1 python:rc-alpine

And then trying to allocate 1GB of RAM before and after running the script above.

With the script, you'll get a MemoryError; without it, the container will be killed.
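The allocation test can be sketched like this (an illustrative standalone example, not from the original answer: the hardcoded 1 GiB cap stands in for the value read from the cgroup file, and the 2 GiB allocation size is an arbitrary amount above the cap):

```python
import resource

# Cap this process's address space at ~1 GiB, soft and hard
# (in the container, this value would come from the cgroup limit file).
cap = 1024 ** 3
resource.setrlimit(resource.RLIMIT_AS, (cap, cap))

try:
    # Attempt to allocate 2 GiB, well over the cap: malloc fails and
    # Python raises MemoryError instead of the kernel OOM-killing us.
    data = bytearray(2 * 1024 ** 3)
    outcome = "allocated"
except MemoryError:
    outcome = "MemoryError"

print(outcome)
```

Note that `resource` is Unix-only, and the failed allocation never actually consumes memory, since the limit is enforced at reservation time.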

Upvotes: 3
