Reputation: 861
I have to run a Bash command, but it will take a few minutes to complete.
If I execute the command normally (synchronously), my application will hang until it finishes.
How do I run Bash commands asynchronously from a Perl script?
Upvotes: 3
Views: 5360
Reputation: 79536
The normal way to do this is with fork. You'll have your script fork, and the child then calls either exec or system on the Bash script (depending on whether the child needs to handle the return code of the Bash script, or otherwise interact with it).
Then your parent would probably want a combination of wait and/or a SIGCHLD handler.
The exact specifics of how to handle it depend a lot on your situation and exact needs.
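A minimal sketch of that pattern (the script name my_script.sh and the polling loop are assumptions, not part of the question):
#!/usr/bin/perl
use strict; use warnings;
use POSIX ":sys_wait_h";   # for WNOHANG

my $pid = fork();
die "fork failed: $!" unless defined $pid;

if ($pid == 0) {
    # Child: replace this process with the Bash script.
    exec("bash", "my_script.sh") or die "exec failed: $!";
}

# Parent: keep working while reaping the child without blocking.
while (waitpid($pid, WNOHANG) == 0) {
    print "parent still responsive...\n";
    sleep 1;
}
print "child exited with status ", $? >> 8, "\n";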
Upvotes: 3
Reputation: 7610
If you do not care about the result, you can just use system("my_bash_script &"); it will return immediately, and the script does whatever needs to be done in the background.
I have two files:
$ cat wait.sh
#!/usr/bin/bash
for i in {1..5}; do echo "wait#$i"; sleep 1; done
$ cat wait.pl
#!/usr/bin/perl
use strict; use warnings;
my $t = time;
system("./wait.sh");    # synchronous call: blocks until wait.sh finishes
my $t1 = time;
print $t1 - $t, "\n";
system("./wait.sh &");  # asynchronous call: returns immediately
print time - $t1, "\n";
Output:
wait#1
wait#2
wait#3
wait#4
wait#5
5
0
wait#1
wait#2
wait#3
wait#4
wait#5
As you can see, the second call returns immediately, but the script keeps writing to stdout.
If you need to communicate with the child, then you need to use fork and redirect STDIN and STDOUT (and STDERR), or you can use the IPC::Open2 or IPC::Open3 modules. Either way, it is always good practice to wait for the child to exit before the caller exits.
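For example, a minimal IPC::Open2 sketch (bc -l is only a stand-in for your own long-running command):
#!/usr/bin/perl
use strict; use warnings;
use IPC::Open2;

# Start a child with handles attached to its stdin and stdout.
my $pid = open2(my $from_child, my $to_child, "bc", "-l");

print $to_child "2^10\n";    # write to the child's stdin
close $to_child;             # send EOF so the child can finish

my $answer = <$from_child>;  # read the child's stdout
print "child said: $answer";

waitpid($pid, 0);            # reap the child before exiting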
If you want to wait for the executed processes, you can try something like this in Bash:
#!/usr/bin/bash
cpid=()
for exe in script1 script2 script3; do
    $exe &
    cpid[$!]="$exe"
done
while [ ${#cpid[*]} -gt 0 ]; do
    for i in ${!cpid[*]}; do
        [ ! -d /proc/$i ] && echo UNSET $i && unset cpid[$i]
    done
    echo DO SOMETHING HERE; sleep 2
done
This script first launches the scripts asynchronously and stores their PIDs in an array called cpid, indexed by PID. Then a loop checks whether each one is still running (i.e. whether /proc/<PID> still exists). If one no longer exists, the text UNSET <PID> is printed and that PID is removed from the array.
It is not bulletproof: if the DO SOMETHING HERE part runs for a very long time, the same PID could be reassigned to a different process in the meantime. But it works well in a typical environment.
But this risk can also be reduced:
#!/usr/bin/bash
# Enable job control and handle SIGCHLD
set -m

remove() {
    for i in ${!cpid[*]}; do
        [ ! -d /proc/$i ] && echo UNSET $i && unset cpid[$i] && break
    done
}
trap "remove" SIGCHLD

# Start background processes
cpid=()
for exe in "script1 arg1" "script2 arg2" "script3 arg3"; do
    $exe &
    cpid[$!]=$exe
done

# Non-blocking wait for background processes to stop
while [ ${#cpid[*]} -gt 0 ]; do
    echo DO SOMETHING; sleep 2
done
This version lets the script receive the SIGCHLD signal whenever an asynchronous subprocess exits. When SIGCHLD is received, the handler looks for the first PID whose process no longer exists and removes it from the array. The waiting while-loop becomes much simpler as a result.
Upvotes: 3
Reputation: 50637
You can use threads to start the Bash command asynchronously,
use threads;
my $t = async {
    return scalar `.. long running command ..`;
};
and later manually test whether the thread is ready to join, getting the output in a non-blocking fashion,
my $output = $t->is_joinable() && $t->join();
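In practice you would repeat that test until the thread is joinable. A minimal polling sketch (the placeholder command and one-second interval are assumptions):
use threads;

my $t = async {
    return scalar `sleep 2; echo done`;  # placeholder for the long-running command
};

my $output;
until (defined $output) {
    if ($t->is_joinable()) {
        $output = $t->join();            # thread finished; collect its stdout
    } else {
        # Not ready yet: do other useful work here instead of blocking.
        sleep 1;
    }
}
print "got: $output";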
Upvotes: 3