Reputation: 7594
I've created a small Bash script that does a MySQL data dump. Because the dump can be fairly large I put the process in the background and then wait for either an error or the log to show up in the file system. I have the following code:
mysqldump main_db > /loc/to/dmp/file.sql 2>/loc/to/error/log/file.log &
The problem is that sometimes when this command is run I get a '/loc/to/error/log/file.log' file with a size of 0 (which I presume means no real error), and its mere existence kills the process even though there is no error.
I'm not sure why STDERR would create a file when there was no data to write. Is this because of the & background process?
Upvotes: 1
Views: 497
Reputation: 1622
A possible easy workaround:
mysqldump main_db > file.sql 2> errors.log ; [ -s errors.log ] || rm -f errors.log
OR
(short readable easily adjustable script with timestamp)
OUTPUT="/loc/to/dmp/$(date +%F.%T).sql"
ERRORS="$OUTPUT.ERRORS"
mysqldump main_db > "$OUTPUT" 2> "$ERRORS"
[ -s "$ERRORS" ] || rm -f "$ERRORS"
Upvotes: 0
Reputation: 229108
The redirected files are set up by the executing shell before your command runs.
I.e. after parsing your command, which includes the redirected stdout/stderr, the shell forks, opens the target files (creating them if they don't exist), attaches the opened file descriptors to file descriptors 1 and 2 (stdout and stderr respectively), and then executes the actual command. So the error log file exists, with size 0, even if the command never writes anything to stderr.
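A quick way to see this (using a throwaway /tmp path for illustration): redirect the stderr of a command that writes nothing at all, and the target file still appears, empty.

```shell
# Demonstration: the shell opens/creates the redirection target before the
# command runs, so the file exists even when nothing is written to stderr.
rm -f /tmp/demo_err.log
true 2> /tmp/demo_err.log    # 'true' produces no output on any stream
[ -e /tmp/demo_err.log ] && echo "file exists"
[ -s /tmp/demo_err.log ] || echo "but its size is 0"
```

This prints "file exists" followed by "but its size is 0", which is exactly the situation the question describes.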
Upvotes: 5
Reputation: 41378
The redirection file is created whether or not any data is ever written to it. Whichever process is watching the error log should check for non-zero filesize, not existence.
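A minimal sketch of such a watcher, with a short `sleep` standing in for the long-running mysqldump and /tmp paths assumed for illustration:

```shell
# Hypothetical watcher: react only to a non-empty error log, since the
# log file is created (empty) the instant the redirection is set up.
ERRLOG=/tmp/demo_dump.err
( sleep 1 ) 2> "$ERRLOG" &      # stand-in for: mysqldump ... 2> "$ERRLOG" &
DUMP_PID=$!
while kill -0 "$DUMP_PID" 2>/dev/null; do
    if [ -s "$ERRLOG" ]; then   # -s: file exists AND has size > 0
        echo "error detected:"
        cat "$ERRLOG"
        break
    fi
    sleep 1
done
wait "$DUMP_PID"                # also collect the dump's real exit status
echo "dump exited with status $?"
```

Checking the exit status via `wait` is worth doing in addition to the size test, since mysqldump can fail without necessarily writing to stderr.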
Upvotes: 0