JP19

Limiting process memory/CPU usage on Linux

I know we can adjust scheduling priority by using the nice command.

However, the man page doesn't say whether it will limit both CPU and memory or only CPU (and in any case, it can't be used to specify absolute limits).

Is there a way to run a process and limit its memory usage to, say, "X" MB and its CPU usage to, say, "Y" MHz in Linux?

Upvotes: 8

Views: 10496

Answers (3)

Mikko Rantalainen

Reputation: 15925

Linux specific answer:

On historical systems, running ulimit -m $LIMIT_IN_KB would have been the correct answer. Nowadays you have to use cgroups, via either cgexec or systemd-run.

However, for systems that are still transitioning to systemd, there does not seem to be any solution that does not require setting up a pre-made configuration for each limit you wish to use. This is because such systems (e.g. Debian/Ubuntu) still use "hybrid hierarchy cgroups", and systemd supports setting memory limits only with the newer "unified hierarchy cgroups". If your Linux distribution is already running systemd with unified hierarchy cgroups, running a given user-mode process with specific limits should work like this:

systemd-run --user --pipe -p MemoryMax=42M -p CPUWeight=10 [command-to-run ...]

or

systemd-run --user --scope -p MemoryMax=42M -p CPUWeight=10 [command-to-run ...]

For possible parameters, see man systemd.resource-control.
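
If you're unsure which cgroup hierarchy your system uses, one quick check is the filesystem type mounted at /sys/fs/cgroup:

stat -fc %T /sys/fs/cgroup

This should print cgroup2fs on a unified-hierarchy system and tmpfs on a legacy or hybrid setup.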

If I've understood correctly, setting CPUWeight tells the kernel how much CPU time to give the process when the CPU is fully loaded. It defaults to 100, and lower values mean less CPU time when multiple processes compete for it. It will not limit CPU usage while the CPU is less than 100% utilized, which is usually a good thing. If you truly want to force the process to use less than a single core even when the machine is otherwise idle, you can set e.g. CPUQuota=10% to cap the process at 10% of a single core. Setting CPUQuota=200% means the process can use up to 2 cores on average (though without CPU binding it may spread that time across more CPUs).
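
As a minimal sketch of the CPUQuota behavior (using yes purely as a stand-in CPU-bound workload):

systemd-run --user --scope -p CPUQuota=10% yes > /dev/null

Watching the process with top in another terminal should show it held near 10% of one core even on an otherwise idle machine.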

Additional information:

Update (year 2021): It seems that

systemd-run --user --pty -p MemoryMax=42M -p CPUWeight=10 ...

should work if you're running a version of systemd that includes the fix for bug https://github.com/systemd/systemd/issues/9512 – in practice, you need the fixes listed here: https://github.com/systemd/systemd/pull/10894

If your system is lacking those fixes, the command

systemd-run --user --pty -p MemoryMax=42M -p CPUWeight=10 ...

appears to work but the memory limits are not actually enforced.

In practice, it seems that Ubuntu 20.04 LTS does not contain the required fixes. The following should fail:

$ systemd-run --user --scope -p MemoryMax=42M -p MemorySwapMax=50M -p CPUWeight=10 stress -t 10 --vm-keep --vm-bytes 10m -m 20

because the stress command is expected to require slightly more than 200 MB of RAM while the memory limits are set lower. According to bug https://github.com/systemd/systemd/issues/10581, Poettering says that this should work if the distro is using cgroups v2, whatever that means in practice.

I don't know of a distro that implements user-mode cgroup limits correctly. As a result, you need root to configure the limits.
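
With root, a system-level scope should enforce the limits even on those systems, e.g.:

sudo systemd-run --scope -p MemoryMax=42M -p CPUWeight=10 [command-to-run ...]

(On cgroup-v1-only setups you may need the legacy MemoryLimit= property instead of MemoryMax=.)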

Update for Ubuntu 22.04 LTS and newer:

Ubuntu finally supports unified hierarchy cgroups with systemd as of Ubuntu 22.04 LTS. Now you can run

$ systemd-run --user --scope -p MemoryMax=42M -p MemorySwapMax=50M -p CPUWeight=10 stress -t 10 --vm-keep --vm-bytes 10m -m 20

and the output will be

stress: info: [2633454] dispatching hogs: 0 cpu, 0 io, 20 vm, 0 hdd
stress: FAIL: [2633454] (416) <-- worker 2633472 got signal 9
stress: WARN: [2633454] (418) now reaping child worker processes
stress: FAIL: [2633454] (416) <-- worker 2633473 got signal 9
stress: WARN: [2633454] (418) now reaping child worker processes
stress: FAIL: [2633454] (416) <-- worker 2633474 got signal 9
stress: WARN: [2633454] (418) now reaping child worker processes
stress: FAIL: [2633454] (452) failed run completed in 0s

where 2633454 is the PID of the stress process. If you increase MemoryMax to 400 MB or more, the process can complete successfully. Note that if you set MemoryMax to exactly 400M, the process will run really slowly because it actually needs a bit more memory and is forced to swap (up to MemorySwapMax) to complete. And if the program spawns multiple processes, the kernel may stall those processes when the memory limit would otherwise be exceeded; this allows the workload to still run in parts even within the limited memory. The kernel will try to avoid stalling the processes completely, and the OOM killer will start killing processes if it's obvious that the limit will be exceeded.
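
For example, re-running the same command with a higher cap (the exact headroom needed may vary a bit between systems) should let it finish:

$ systemd-run --user --scope -p MemoryMax=450M -p MemorySwapMax=50M -p CPUWeight=10 stress -t 10 --vm-keep --vm-bytes 10m -m 20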

Upvotes: 5

Douglas Leeder

Reputation: 53310

You might want to investigate cgroups as well as the older (obsolete) ulimit.
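
As a rough sketch of the cgroup v1 approach using the classic tooling (package cgroup-tools on Debian/Ubuntu; my-program is a placeholder), assuming root:

# create a cgroup with a 100 MB memory cap and run a program inside it
sudo cgcreate -g memory:mylimited
sudo cgset -r memory.limit_in_bytes=100M mylimited
sudo cgexec -g memory:mylimited ./my-program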

Upvotes: 7

Tim Dorr

Reputation: 4921

In theory, you should be able to use ulimit for this purpose. However, I've personally never gotten it to work.
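
For what it's worth, the part of ulimit that still works on modern Linux kernels is the address-space limit (ulimit -v); the resident-set-size limit (ulimit -m) has been silently ignored since Linux 2.4.30. A sketch, with my-program as a placeholder:

# cap address space at 100 MB and CPU time at 60 seconds; run in a
# subshell so the limits don't stick to the interactive shell
( ulimit -v 102400; ulimit -t 60; ./my-program )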

Upvotes: 2
