Reputation: 3134
I'm trying to figure out why I'm seeing diminishing speed returns when backgrounding lots of processes in a Bash script. Something like:
function lolecho() {
    echo "lol" &> /dev/null
}

c=0
while true; do
    for i in $(seq 1 1000); do
        lolecho &
        ((c+=1))
        if [[ $((c%1000)) -eq 0 ]]; then
            echo "Count $c"
        fi
    done
    sleep .1
done
It screams out of the gate up to 10,000, 20,000... but it then starts to slow down in how quickly it can put up backgrounded processes around 70,000... 80,000. As in, the rate at which the count prints to screen slows down by a seemingly linear amount, depending on the total.
Should not the rate at which the machine can run background jobs that finish basically instantly be consistent, regardless of how many have been added and closed?
Upvotes: 0
Views: 357
Reputation: 33819
A bit long for a comment ... OP's solution of using the wait
command is fine but could probably be fine-tuned a bit ...
As coded (in OP's answer): all 1000 background jobs must finish before the next batch of 1000 is started, so the number of jobs in flight drains to zero at the end of every pass through the loop.
For a more consistent throughput I'd want to use wait -n
to start up a new process as soon as one finishes. Granted, this may not make much difference for this simple example (lolecho()), but if doing some actual work you should find you maintain a fairly steady workload.
A couple of examples of using wait -n: here and here (see the 2nd half of the answer).
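As a rough illustration of the idea (not taken from the linked answers), here's a minimal sketch that keeps a fixed pool of jobs busy with wait -n; it assumes bash 4.3+, and the pool size of 8 is an arbitrary example value:

function lolecho() {
    echo "lol" &> /dev/null
}

pool=8                          # arbitrary cap on jobs in flight
c=0
for i in $(seq 1 $pool); do     # prime the pool
    lolecho &
done
while true; do
    wait -n                     # block until any one job exits
    lolecho &                   # immediately replace it
    ((c+=1))
    if [[ $((c%1000)) -eq 0 ]]; then
        echo "Count $c"
    fi
done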
If using an older version of bash
that does not support the -n
flag, here's an example using a polling process.
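For that older-bash case, one possible polling sketch (again an illustrative assumption, not the linked example) checks the running-job count with jobs -rp and naps while the pool is full:

function lolecho() {
    echo "lol" &> /dev/null
}

pool=8                                   # arbitrary cap on jobs in flight
c=0
while true; do
    while [[ $(jobs -rp | wc -l) -ge $pool ]]; do
        sleep 0.05                       # pool is full; poll again shortly
    done
    lolecho &
    ((c+=1))
    if [[ $((c%1000)) -eq 0 ]]; then
        echo "Count $c"
    fi
done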
Upvotes: 0
Reputation: 3134
The answer was to use the Bash built-in wait command:
function lolecho() {
    echo "lol" &> /dev/null
}

c=0
while true; do
    for i in $(seq 1 1000); do
        lolecho &
        ((c+=1))
        if [[ $((c%1000)) -eq 0 ]]; then
            echo "Count $c"
        fi
    done
    wait # <------------
done
The script now launches processes at a consistent rate, and faster overall; presumably the shell no longer has to track an ever-growing list of un-reaped background jobs.
Upvotes: 0