Reputation: 24981
I have a small script, which is called daily by crontab using the following command:
/homedir/MyScript &> some_log.log
The problem with this method is that some_log.log is only created after MyScript finishes. I would like to flush the output of the program into the file while it's running, so that I can do things like
tail -f some_log.log
and keep track of the progress, etc.
Upvotes: 121
Views: 168722
Reputation: 481
I had a similar problem where a redirect was sometimes buffering.
I couldn't easily use stdbuf because my command was a bash function. You would have to export the function in that case, which was too much work.
My workaround was to touch the file before it was used, e.g.:
mybashfunction >> ${mybufferedfile}
touch ${mybufferedfile}              # <--- I needed to add this
diff ${mybufferedfile} identical.output.txt
Without the touch command, the file was sometimes still buffered, and diff found differences because of the incomplete buffered file.
Upvotes: 0
Reputation: 461
script -c <PROGRAM> -f OUTPUT.txt
The key is -f. Quoting from man script:
-f, --flush
Flush output after each write. This is nice for telecooperation: one person
does 'mkfifo foo; script -f foo', and another can supervise real-time what is
being done using 'cat foo'.
Run in background:
nohup script -c <PROGRAM> -f OUTPUT.txt
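Applied to the question's setup, a minimal sketch (script path and log name taken from the question) could be:
script -c /homedir/MyScript -f some_log.log   # -f flushes after each write
tail -f some_log.log                          # run this in another terminal to follow progress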
Upvotes: 46
Reputation: 1803
I found a solution to this here. Using the OP's example, you basically run
stdbuf -oL /homedir/MyScript &> some_log.log
and then the buffer gets flushed after each line of output. I often combine this with nohup to run long jobs on a remote machine.
stdbuf -oL nohup /homedir/MyScript &> some_log.log
This way your process doesn't get cancelled when you log out.
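A minimal sketch tying this back to the question's tail -f idea (paths taken from the question):
stdbuf -oL nohup /homedir/MyScript &> some_log.log &
tail -f some_log.log   # lines appear as soon as MyScript prints them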
Upvotes: 125
Reputation: 46796
Would this help?
tail -f access.log | stdbuf -oL cut -d ' ' -f1 | uniq
Here stdbuf -oL makes cut flush its output after each line, so unique entries from access.log are displayed immediately.
Upvotes: 6
Reputation: 3684
An alternative to stdbuf is awk '{ print; fflush() }', which re-emits and flushes each line as it arrives.
I wish there were a bash builtin to do this.
Normally it shouldn't be necessary, but older bash versions might have synchronization bugs on file descriptors.
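A hedged sketch of plugging this into the question's command (assuming MyScript writes its progress to stdout/stderr):
/homedir/MyScript 2>&1 | awk '{ print; fflush() }' > some_log.log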
Upvotes: 0
Reputation: 13475
Thanks @user3258569, script is maybe the only thing that works in busybox!
The shell was freezing for me after it, though. Looking for the cause, I found this big red warning ("don't use in non-interactive shells") on the script manual page:
script is primarily designed for interactive terminal sessions. When stdin is not a terminal (for example: echo foo | script), then the session can hang, because the interactive shell within the script session misses EOF and script has no clue when to close the session. See the NOTES section for more information.
True. script -c "make_hay" -f /dev/null | grep "needle" was freezing the shell for me.
Contrary to the warning, I thought that echo "make_hay" | script WILL pass an EOF, so I tried
echo "make_hay; exit" | script -f /dev/null | grep 'needle'
and it worked!
Note the warnings in the man page. This may not work for you.
Upvotes: 2
Reputation: 3482
You can use tee to write to the file without the need for flushing.
/homedir/MyScript 2>&1 | tee some_log.log > /dev/null
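A variant sketch, if you would rather watch the output in the terminal than discard tee's copy:
/homedir/MyScript 2>&1 | tee some_log.log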
Upvotes: 8
Reputation: 46
Buffering of output depends on how your program /homedir/MyScript is implemented. If you find that output is getting buffered, you have to force flushing in your implementation. For example, use sys.stdout.flush() if it's a Python program, or fflush(stdout) if it's a C program.
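If you cannot modify the program, a hedged workaround for the Python case is to disable buffering from the outside with python -u (MyScript.py is a hypothetical name here):
python -u /homedir/MyScript.py &> some_log.log   # -u runs Python unbuffered, so no flush() calls are needed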
Upvotes: 3
Reputation: 2818
As just spotted here, the problem is that you have to wait for the programs that your script runs to finish their jobs.
If your script runs programs in the background, you can try something more.
In general, a call to sync before you exit flushes the file system buffers and can help a little.
If the script starts some programs in the background (&), you can wait for them to finish before you exit from the script. To get an idea of how this can work, see the example below:
#!/bin/bash
# ... some stuff ...
program_1 &            # start program 1 in the background
PID_PROGRAM_1=${!}     # remember its PID
# ... some other stuff ...
program_2 &            # start program 2 in the background
wait ${!}              # wait for it to finish (not really useful here)
# ... some other stuff ...
daemon_1 &             # we will not wait for this one to finish
program_3 &            # start program 3 in the background
PID_PROGRAM_3=${!}     # remember its PID
# ... last other stuff ...
sync
wait $PID_PROGRAM_1
wait $PID_PROGRAM_3    # program 2 has already ended
# ...
Since wait works with jobs as well as with PID numbers, a lazy solution is to put this at the end of the script:
for job in `jobs -p`
do
    wait $job
done
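Since wait with no arguments waits for all background children of the shell, an even lazier equivalent is simply:
wait   # no arguments: wait for every background job of this shell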
The situation is more difficult if you run something that itself runs something else in the background, because then you have to find and wait for (if appropriate) all the child processes: for example, if you run a daemon, it probably does not make sense to wait for it to finish :-).
Note: wait ${!} means "wait until the last background process has completed", where $! is the PID of the last background process. So putting wait ${!} just after program_2 & is equivalent to executing program_2 directly, without sending it to the background with &.
From the help of wait:
Syntax
    wait [n ...]
Key
    n   A process ID or a job specification
Upvotes: 0
Reputation: 6216
I had this problem with a background process in Mac OS X using StartupItems. This is how I solved it:
If I run sudo ps aux I can see that mytool is launched.
I found that (due to buffering) when Mac OS X shuts down, mytool never transfers the output to the sed command. However, if I execute sudo killall mytool, then mytool transfers the output to the sed command. Hence, I added a stop case to the StartupItems that is executed when Mac OS X shuts down:
start)
if [ -x /sw/sbin/mytool ]; then
# run the daemon
ConsoleMessage "Starting mytool"
(mytool | sed .... >> myfile.txt) &
fi
;;
stop)
ConsoleMessage "Killing mytool"
killall mytool
;;
Upvotes: -3
Reputation:
Well, like it or not, this is how redirection works.
In your case, the output of your script is redirected to that file, and it shows up there only once your script has finished.
What you want to do is add those redirections inside your script.
Upvotes: -5
Reputation: 2960
bash itself will never actually write any output to your log file. Instead, the commands it invokes as part of the script will each individually write output and flush whenever they feel like it. So your question is really how to force the commands within the bash script to flush, and that depends on what they are.
Upvotes: 36
Reputation: 993005
This isn't a function of bash, as all the shell does is open the file in question and then pass the file descriptor as the standard output of the script. What you need to do is make sure output is flushed from your script more frequently than it currently is.
In Perl for example, this could be accomplished by setting:
$| = 1;
See perlvar for more information on this.
Upvotes: 6