Reputation: 1239
Is it true that ulimit -u
sets an upper bound:

1. on the number of processes created descending from the process in which it is called?
2. taking into account the number of processes already running with the same (effective, filesystem, real, saved?) user ID?
3. portably across POSIX systems?
Partial answers seem to be available via inference from help ulimit, man $(basename $SHELL), setrlimit(3), or from looking at the output of
$ ulimit -u 708 | ulimit -u
709
$
with some assumptions about a shell's (sub)process allocation mechanisms in piped commands. (That's a hard limit in the example above.) Is there a comprehensive resource, for study or reference, that actually focuses on Unix/POSIX resource management?
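The 709 above is consistent with each side of a pipeline running in its own subshell: a limit lowered on the left-hand side never reaches the right-hand side or the parent shell. A minimal sketch of that behavior in bash (the value 64 is arbitrary, chosen to be safely below a typical soft limit):

```shell
orig=$(ulimit -u)            # soft limit of the current shell (bash builtin)
( ulimit -u 64; ulimit -u )  # prints 64: the lowered limit exists only inside the subshell
ulimit -u                    # prints the original value: the parent shell is untouched
```

The same isolation applies to each stage of a pipeline, which is why ulimit -u 708 on the left of the pipe has no effect on the ulimit -u that reports 709.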
Elaborating on list item 1., consider two login shells with ulimit -u limits of 200 and 100, respectively. Also consider a fork bomb:
bomb() { # increment and output cnt, pipe to new process
    cnt=$1; cnt=$((cnt + 1)); echo $cnt; sleep 1;
    echo | bomb $cnt;
}
I run bomb
in the shell with the 200-process limit. Should I expect termination near 200 processes or near 100 processes, given the limit of 100 set in the other shell?
This is what I see:
$ bomb 1
2
3
...
196
197
-bash: fork: retry: No child processes
-bash: fork: retry: Resource temporarily unavailable
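Termination near 197 rather than 200 is what you would see if the limit counts every process owned by the same user, so whatever was already running under that user (the login shell itself, among others) consumes part of the quota before the bomb starts. A rough way to compare the two numbers (a sketch; the exact count varies by system, and ulimit -u itself is a bashism):

```shell
limit=$(ulimit -u)                                            # per-user process cap
count=$(ps -eo ruser= | awk -v u="$(id -run)" '$1 == u' | wc -l)  # processes already charged to me
echo "soft limit: $limit   processes already counted against it: $count"
```

The difference between the two is roughly how many additional forks the bomb can make before fork(2) starts failing with EAGAIN.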
Upvotes: 4
Views: 1450
Reputation: 30823
- on the number of processes created descending from the process in which it is called?
Not only those processes: the limit affects all processes launched by the same user (same UID).
- taking into account the number of processes already running with the same (effective, filesystem, real, saved?) user ID?
Yes, they are taken into account; more precisely, every single thread sharing the same user ID is counted (on Linux, RLIMIT_NPROC applies to the real user ID).
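On Linux that per-thread accounting can be observed directly: ps -L lists kernel tasks (threads), so the thread count for a user is at least as large as the process count. A sketch (the -L option is procps/Linux-specific, not POSIX):

```shell
me=$(id -run)                                                 # real user name
procs=$(ps -eo ruser=  | awk -v u="$me" '$1 == u' | wc -l)    # processes owned by me
tasks=$(ps -eLo ruser= | awk -v u="$me" '$1 == u' | wc -l)    # threads (Linux -L), >= procs
echo "processes: $procs   threads counted toward the limit: $tasks"
```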
- portably across POSIX systems?
No guarantee. This is a bashism not defined in the POSIX shell standard, whose ulimit
supports only the file-size limit (-f). It might or might not be implemented, depending on the underlying OS, as there is no portable (POSIX) way to do it.
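A script that wants to use -u where available can feature-test it instead of assuming it, since only -f is guaranteed by POSIX. A minimal sketch:

```shell
# Probe in a subshell so an unsupported option can't abort the script.
if (ulimit -u) >/dev/null 2>&1; then
    echo "process limit: $(ulimit -u)"
else
    echo "this shell has no ulimit -u; only ulimit -f is portable"
fi
```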
Upvotes: 3