Reputation: 7246
I have a bash script that parallelises some time-consuming commands, and so far it runs perfectly. I am using the wait command as follows:
docker pull */* &
docker pull */* &
docker pull */* &
docker pull */* &
docker pull */* &
docker pull */* &
docker pull */* &
composer install -n &
wait
Now I want this script to abort all commands and exit with a non-zero code if one of the commands fails. How do I achieve this?
Note: */* stands for docker image names; they are not important for the context.
Upvotes: 1
Views: 3305
Reputation: 89
One way is to have each pull write a failure flag to a shared status file, poll that file in the background, and let wait -n return as soon as either background job finishes:
echo 0 > status
(
    (docker pull */* || echo 1 > status) &
    (docker pull */* || echo 1 > status) &
    (docker pull */* || echo 1 > status) &
    (docker pull */* || echo 1 > status) &
    (docker pull */* || echo 1 > status) &
    wait
) &
# Poll the status file; leave the loop as soon as a pull has failed.
while true; do
    [ "$(cat status)" -eq 0 ] || break
    sleep 1s
done &
# Returns when either all pulls have finished or the poller saw a failure.
wait -n
# Propagate the recorded status as the exit code.
exit "$(cat status)"
Or, if you want to get rid of the while loop:
mkfifo status
(
    (docker pull */* || echo 1 > status) &
    (docker pull */* || echo 1 > status) &
    (docker pull */* || echo 1 > status) &
    (docker pull */* || echo 1 > status) &
    (docker pull */* || echo 1 > status) &
    wait
    echo 0 > status   # reached only once every pull has succeeded
) &
# cat blocks on the FIFO until the first write, then exits at EOF,
# so wait -n returns as soon as a result is known.
cat status &
wait -n
rm -f status
Upvotes: 0
Reputation: 252
If the return value of a command is 0, it indicates success; any other value indicates an error. So you can create a function and call it before each command. (This only works if you remove the &, i.e. run the commands sequentially.)
# Run the given command; abort the whole script if it fails.
valid() {
    if "$@"; then
        return
    else
        exit 1
    fi
}
valid docker pull */*
valid docker pull */*
valid docker pull */*
valid docker pull */*
valid docker pull */*
valid docker pull */*
valid docker pull */*
valid composer install -n
wait # redundant once the & are removed: there are no background jobs left
Another alternative is to put
set -e
at the beginning of your script. This causes the shell to exit immediately if a simple command exits with a non-zero status. Note that, like the valid function, this only helps for sequential commands: set -e does not see the failure of a command started with &, since the shell only learns its status through wait.
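For example, a minimal sketch of that sequential approach (alpine and debian here are placeholder image names, not the OP's actual ones):
#!/bin/bash
set -e                # abort the script at the first failing command

docker pull alpine    # placeholder image; a failed pull stops the script here
docker pull debian    # placeholder image
composer install -n
The trade-off is that the commands now run one after another instead of in parallel.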
If you have your own Docker Registry, you don't need to pull the images in parallel. Docker Registry 2.0, which works with Docker 1.7.0 and above, downloads an image's layers in parallel, which makes each pull much faster, so you don't have to pull all your images simultaneously.
Upvotes: -2
Reputation: 44043
This requires bash for wait -n (the option was added in bash 4.3).
The trick here is to keep a list of the subprocesses you spawned, then wait for them individually (in the order they finish). You can then check the return code of the process that finished and kill the lot of them if it failed. For example:
#!/bin/bash
# Remember the pid after each command, keep a list of them.
# pidlist=foo could also go on a line of its own, but I
# find this more readable because I like tabular layouts.
sleep 10 & pidlist="$!"
sleep 10 & pidlist="$pidlist $!"
sleep 10 & pidlist="$pidlist $!"
sleep 10 & pidlist="$pidlist $!"
sleep 10 & pidlist="$pidlist $!"
sleep 10 & pidlist="$pidlist $!"
false & pidlist="$pidlist $!"
echo $pidlist
# $pidlist intentionally unquoted so every pid in it expands to a
# parameter of its own. Note that $i doesn't have to be the PID of
# the process that finished, it's just important that the loop runs
# as many times as there are PIDs.
for i in $pidlist; do
# Wait for a single process in the pid list to finish,
# check if it failed,
if ! wait -n $pidlist; then
# If it did, kill the lot. Redirect kill's stderr
# to /dev/null so it doesn't complain about
# already-finished processes not existing
# anymore.
kill $pidlist 2>/dev/null
# then exit with a non-zero status.
exit 1
fi
done
Upvotes: 5