MrE

Reputation: 20788

Is there a way to use `script` with shell functions? (colorized output)

I'm using a script to run several tests (npm, Python, etc.).

These have colored outputs.

I'm running some of these tests in parallel, sending the processes to the background and capturing their output in variables to display when done (as opposed to letting the output go straight to the TTY and having multiple outputs mixed together).

All works well, but the output is not colored, and I would like to keep the colors. I understand this is because the output is not going to a TTY, so the color is stripped, and I looked for tricks to avoid this.
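For context, the parallel pattern is roughly like this (a minimal sketch with hypothetical stand-in commands in place of the real test suites; here each job's output goes through a temp file, while my real script collects it into a variable):

```shell
# Stand-ins for the real test suites
test_a() { echo "A done"; }
test_b() { echo "B done"; }

# Run each test in the background, capturing its output,
# then print the results sequentially once both have finished
out_a=$(mktemp); out_b=$(mktemp)
test_a > "$out_a" & pid_a=$!
test_b > "$out_b" & pid_b=$!
wait "$pid_a" "$pid_b"
combined=$(cat "$out_a" "$out_b")
echo "$combined"
rm -f "$out_a" "$out_b"
```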

This answer: Can colorized output be captured via shell redirect?

offers a way to do this, but it doesn't work with shell functions.

If I do:

OUTPUT=$(script -q /dev/null npm test | cat)
echo -e $OUTPUT

I get the output in the variable and the echo command output is colored.

but if I do:

function run_test() { npm test; }
OUTPUT=$(script -q /dev/null run_test | cat)
echo -e $OUTPUT

I get:

script: run_test: No such file or directory

If I call the run_test function passing it to script like:

function run_test() { npm test; }
OUTPUT=$(script -q /dev/null `run_test` | cat)
echo -e $OUTPUT

the backticks run run_test first and pass its already-captured (and therefore uncolored) output to script as arguments, so the result is not colored either.

Is there a way to make shell functions work with `script`?

I could have the script call in the function like:

function run_test() { script -q /dev/null npm run test | cat; }

but there are several issues with that approach.

PS: I also tried npm config set color always to force npm to always output colors, but that doesn't seem to help; plus I have other functions to call that are not all npm, so it would not work for everything anyway.

Upvotes: 4

Views: 304

Answers (2)

Charles Duffy

Reputation: 295472

You can use a program such as unbuffer, which simulates a TTY, to get color output from software whose output is ultimately going to a pipeline.

In the case of:

unbuffer npm test | cat

...there's a TTY simulated by unbuffer, so npm doesn't see the FIFO going to cat on its output.

If you want to run a shell function behind a shim of this type, be sure to export it to the environment, as with export -f.


Demonstrating how to use this with a shell function:

myfunc() { echo "In function"; (( $# )) && { echo "Arguments:"; printf ' - %s\n' "$@"; }; }
export -f myfunc
unbuffer bash -c '"$@"' _ myfunc "Argument one" "Argument two"

Upvotes: 2

MrE

Reputation: 20788

I tried unbuffer and it doesn't seem to work with shell functions either.

`script` doesn't work when passed a shell function as its command, but it does accept input on stdin, so what ended up working for me was:

script -q /dev/null <<< "run_test"

or

echo "run_test" | script -q /dev/null

so I could capture this in a shell variable, even using a variable to hold the command, like:

OUTPUT=$(echo "$COMMAND" | script -q /dev/null)

and later print the colored output with:

echo -e $OUTPUT

Unfortunately, this still outputs some extra garbage (i.e. the shell name, the command name, and the exit command at the end).

Since I wanted to capture the exit code, I could not pipe the output somewhere else, so I went this way:

run() {
    run_in_background "$@" &
}

run_in_background() {
    COMMAND="$@"  # whatever is passed to the function
    CODE=0

    OUTPUT=$(echo "$COMMAND" | script -q /dev/null) || CODE=$(( CODE + $? ))

    # strip the shell name and the echoed command from script's output
    echo -e $OUTPUT | grep -v "bash" | grep -v "$COMMAND"
    if [ "$CODE" != "0" ]; then exit 1; fi
}

and use like:

# test suites shell functions
run_test1() { npm test; }
run_test2() { python manage.py test; }

# queue tests to run in background jobs
run run_test1
run run_test2
# wait for all to finish
wait

I'm skipping the part where I catch the errors and propagate failure to the top PID, but you get the gist.
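That elided propagation step can be sketched like this (hypothetical stand-in jobs in place of the real test functions; the idea is just to remember each background job's PID and wait on each one individually):

```shell
# Stand-in jobs: one succeeds, one fails
job_ok()  { true; }
job_bad() { false; }

# Collect each background job's PID, then wait on each
# and record whether any of them failed
pids=()
job_ok  & pids+=($!)
job_bad & pids+=($!)

fail=0
for pid in "${pids[@]}"; do
    wait "$pid" || fail=1
done
echo "fail=$fail"
```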

Upvotes: 0
