Reputation: 3353
I am using a bash script that starts multiple processes which have to come up in a particular order, and certain actions have to be completed (they then print corresponding messages to their logs) before the next one can be started. The bash script has the following code, which works well in most cases:
tail -Fn +1 "$log_file" | while read line; do
    if echo "$line" | grep -qEi "$search_text"; then
        echo "[INFO] $process_name process started up successfully"
        pkill -9 -P $$ tail
        return 0
    elif echo "$line" | grep -qEi '^error\b'; then
        echo "[INFO] ERROR or Exception is thrown listed below. $process_name process startup aborted"
        echo " ($line) "
        echo "[INFO] Please check $process_name process log file=$log_file for problems"
        pkill -9 -P $$ tail
        return 1
    fi
done
However, when we set the processes to log at DEBUG level, they print so much output that this script cannot keep up, and it takes about 15 minutes after the process is complete for the bash script to catch up. Is there a way of optimizing this, e.g. changing 'while read line' to something like 'while read 100 lines'?
Upvotes: 7
Views: 1348
Reputation: 8223
How about not forking up to two grep processes per log line?

tail -Fn +1 "$log_file" | grep -Ei "$search_text|^error\b" | while read line; do

So one long-running grep process does the preprocessing, if you will.

Edit: As noted in the comments, it is safer to add --line-buffered to the grep invocation.
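Applied to the loop from the question, the whole thing might look roughly like this (a sketch only; as in the original, it assumes the code runs inside a function so that return keeps its meaning):

# One long-running grep pre-filters the stream, so the per-line greps in the
# loop now only run for the few lines that already match something.
# --line-buffered makes grep flush each matching line immediately.
tail -Fn +1 "$log_file" | grep --line-buffered -Ei "$search_text|^error\b" | while read line; do
    if echo "$line" | grep -qEi "$search_text"; then
        echo "[INFO] $process_name process started up successfully"
        pkill -9 -P $$ tail
        return 0
    elif echo "$line" | grep -qEi '^error\b'; then
        echo "[INFO] ERROR or Exception is thrown listed below. $process_name process startup aborted"
        echo " ($line) "
        echo "[INFO] Please check $process_name process log file=$log_file for problems"
        pkill -9 -P $$ tail
        return 1
    fi
done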
Upvotes: 4
Reputation: 58778
Some tips relevant for this script (a rough sketch combining them follows the list):

- Use grep ... <<<"$line" to execute fewer echos.
- Use tail -f | grep -q ... to avoid the while loop entirely by stopping as soon as there is a matching line.
- Without -i on grep it might be significantly faster to process the input.
- Don't kill -9.
Upvotes: 0