rsaw

Reputation: 3537

bash: Possible to require double Ctrl-c to exit a script?

End-goal: a BASH script that is waiting for background jobs to finish should not abort on the first Ctrl-c; instead, it should require a second Ctrl-c to quit.

I'm well aware of how the BASH-builtin trap works. You can either:

  1. Use it to ignore a signal altogether (e.g., trap '' 2) ... or

  2. Use it to have arbitrary commands executed before a signal's original function is allowed to happen (e.g., trap cmd 2, where cmd is run before the parent script will be interrupted due to SIGINT)
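A minimal sketch of the two modes (the bash -c wrappers are just to keep the demo self-contained):

```shell
#!/bin/bash
# Mode 1: the signal is ignored outright -- the script survives SIGINT.
bash -c 'trap "" 2; kill -2 $$; echo "still alive after SIGINT"'

# Mode 2: a handler runs when the signal arrives, then execution continues.
bash -c 'trap "echo handler ran" 2; kill -2 $$; echo "resumed after handler"'
```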

So the question boils down to this:

How can I effectively combine 1 & 2, i.e., prevent the end result a signal would normally lead to (1 -- e.g., stop the script from cancelling due to SIGINT) while also making that signal trigger something else (2 -- e.g., increment a counter, check the counter, and conditionally either print a warning or exit)?

Put more simply:

How can I make a signal do something else entirely, rather than just inserting a job before it does its usual thing?

Here's some example code to demonstrate what I'm aiming at; however, it of course doesn't work -- because trap can only do 1 or 2 from above.

#!/bin/bash
declare -i number_of_times_trap_triggered
cleanup_bg_jobs() {
    number_of_times_trap_triggered+=1
    if [[ ${number_of_times_trap_triggered} -eq 1 ]]; then
        echo "There are background jobs still running"
        echo "Hit Ctrl-c again to cancel all bg jobs & quit"
    else
        echo "Aborting background jobs"
        for pid in ${bg_jobs}; do echo "  Killing ${pid}"; kill -9 ${pid}; done
    fi
}
f() { sleep 5m; }
trap cleanup_bg_jobs 2
bg_jobs=
for job in 1 2 3; do
    f &
    bg_jobs+=" $!"
done
wait

So this is the output you end up getting when you press Ctrl-c once:

[rsaw:~]$ ./zax 
^CThere are background jobs still running
Hit Ctrl-c again to cancel all bg jobs & quit
[rsaw:~]$ ps axf|tail -6 
24569 pts/3    S      0:00 /bin/bash ./zax
24572 pts/3    S      0:00  \_ sleep 5m
24570 pts/3    S      0:00 /bin/bash ./zax
24573 pts/3    S      0:00  \_ sleep 5m
24571 pts/3    S      0:00 /bin/bash ./zax
24574 pts/3    S      0:00  \_ sleep 5m

Of course I could modify that to clean up the jobs on the first Ctrl-c, but that's not what I want. I want to stop BASH from quitting after the first trap is triggered ... until it's triggered a second time.

PS: Target platform is Linux (I couldn't care less about POSIX compliance) with BASH v4+

Upvotes: 9

Views: 4556

Answers (4)

jkgeyti

Reputation: 2404

I had a slightly different use case, and wanted to leave the solution here, as Google led me to this topic. You can keep running a command, allow the user to restart it with one CTRL+C, and kill it with a double CTRL+C, in the following manner:

trap_ctrlC() {
    echo "Press CTRL-C again to kill. Restarting in 2 seconds"
    sleep 2 || exit 1
}

trap trap_ctrlC SIGINT SIGTERM

while true; do  
    ... your stuff here ...
done
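The reason the second CTRL+C kills the script is that it interrupts the handler's sleep 2, which then exits non-zero and triggers the || exit 1. A standalone sketch of just that mechanism (using SIGTERM and a background sleep, since a script's background children ignore SIGINT):

```shell
#!/bin/bash
# A sleep killed by a signal returns 128 + signal number -- the same
# reason `sleep 2 || exit 1` fires when a second Ctrl-c lands mid-handler.
sleep 10 &
pid=$!
kill -TERM "$pid"            # stand-in for the second Ctrl-c
wait "$pid" 2>/dev/null      # reap it; wait reports the child's status
echo "sleep exit status: $?" # 143 = 128 + 15 (SIGTERM)
```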

Upvotes: 11

Eric Siegerman

Reputation: 11

  2. Use it to have arbitrary commands executed before a signal's original function is allowed to happen (e.g., trap cmd 2, where cmd is run before the parent script will be interrupted due to SIGINT)

The "run before the parent script will be interrupted" part of the above is incorrect. A trap handler is run instead of letting SIGINT (or whatever) interrupt the process. More accurately:

  • the default action of SIGINT (and of most, but not all, other signals) is to terminate the process
  • trap "command" SIGINT causes command to be run instead of (not as well as) the default action

So with your SIGINT handler installed, the SIGINT doesn't interrupt the entire script. But it does interrupt the wait command. When the trap handler finishes, the script resumes after the wait, i.e. it falls off the end and exits normally. You can see this by adding some debugging code:

echo Waiting
wait
echo Back from wait
exit 55                   # Arbitrary value that wouldn't otherwise occur

This version produces the following:

$ ./foo
Waiting
^CThere are background jobs still running
Hit Ctrl-c again to cancel all bg jobs & quit
Back from wait
$ echo $?
55
$ 

What you need to do is repeat the wait after the handler returns. This version:

#!/bin/bash
declare -i number_of_times_trap_triggered
cleanup_bg_jobs() {
    number_of_times_trap_triggered+=1
    if [[ ${number_of_times_trap_triggered} -eq 1 ]]; then
        echo "There are background jobs still running"
        echo "Hit Ctrl-c again to cancel all bg jobs & quit"
    else
        echo "Aborting background jobs"
        for pid in ${bg_jobs}; do echo "  Killing ${pid}"; kill -9 ${pid}; done
        exit 1
    fi
}
f() { sleep 5m; }
trap cleanup_bg_jobs 2
bg_jobs=
for job in 1 2 3; do
    f &
    bg_jobs+=" $!"
done

while true; do
    echo Waiting
    wait
    echo Back from wait
done

does as you requested:

$ ./foo
Waiting
^CThere are background jobs still running
Hit Ctrl-c again to cancel all bg jobs & quit
Back from wait
Waiting
^CAborting background jobs
  Killing 24154
  Killing 24155
  Killing 24156
$ 

Notes:

  • I've left in the debugging stuff; obviously you'd remove it in production
  • The handler now does exit 1 after killing off the subprocesses. That's what breaks out of the infinite main loop

Upvotes: 0

rsaw

Reputation: 3537

A colleague (Grega) just gave me a solution which ... well I can't believe I didn't think of it first.

"My approach would ... be to lay it off for long enough, possibly forever, using a function that just never returns or something (another wait?), so that the second handler can do its job properly."

For the record, wait would not work here (the handler would just be waiting recursively). However, adding a sleep command to my original code's cleanup_bg_jobs() function would take care of it, though it would lead to orphaned processes. So I leveraged process groups to ensure that all children of the script really do get killed. Simplified example for posterity:

#!/bin/bash
declare -i count=0
handle_interrupt() {
    count+=1
    if [[ ${count} -eq 1 ]]; then
        echo "Background jobs still running"
        echo "Hit Ctrl-c again to cancel all bg jobs & quit"
        sleep 1h
    else
        echo "Aborting background jobs"
        pkill --pgroup 0
    fi
}
f() { tload &>/dev/null; }
trap handle_interrupt 2
for job in 1 2 3; do
    f &
done
wait
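For what it's worth, --pgroup 0 means "the caller's own process group", and background children inherit the script's group, which is why the pkill above catches them all. A standalone sketch (assumes the procps pkill/pgrep tools; a name pattern is added here so the demo script itself survives):

```shell
#!/bin/bash
# Background children share the script's process group, so
# --pgroup 0 ("my own group") matches every one of them by name.
sleep 100 &
sleep 100 &
pkill --pgroup 0 sleep       # SIGTERM only the sleeps in this group
wait 2>/dev/null             # reap them so pgrep sees nothing
echo "sleeps left in my group: $(pgrep --pgroup 0 --count sleep)"
```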

Upvotes: 1

qpfiffer

Reputation: 121

I did something like this once, and it mostly breaks down to this:

ATTEMPT=0
handle_close() {
    if [ $ATTEMPT -eq 0 ]; then
        ATTEMPT=1
        echo "Shutdown."
    else
        echo "Already tried to shutdown. Killing."
        exit 0
    fi
}
trap handle_close SIGINT SIGTERM

You can set a variable in your handler that you can check again next time it is trapped.
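A runnable sketch putting that flag variable into a full script (the background job and the wait loop are illustrative additions in the spirit of the other answers; only the trap part comes from this one):

```shell
#!/bin/bash
ATTEMPT=0
handle_close() {
    if [ $ATTEMPT -eq 0 ]; then
        ATTEMPT=1
        echo "Shutdown."
    else
        echo "Already tried to shutdown. Killing."
        exit 0
    fi
}
trap handle_close SIGINT SIGTERM

sleep 5 &            # stand-in for the real background workload
while true; do
    wait && break    # an interrupted wait returns non-zero, so loop again
done
```

The first signal only flips the flag; the interrupted wait is simply retried. The second signal takes the else branch and exits.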

Upvotes: 5
