Rusty Lemur

Reputation: 1885

bash flock an output file

If several processes may be writing to the same output file, is it safe to flock the output file itself instead of a separate lock file?

E.g. is this safe?

outputFile=output.dat
exec 200>>"$outputFile"
flock -e 200
grep -i error "$1" >> "$outputFile"
flock -u 200

All of the examples I've found with flock use a separate lock file.

E.g.

outputFile=output.dat
lockFile=/var/tmp/output.dat
exec 200>"$lockFile"
flock -e 200
grep -i error "$1" >> "$outputFile"
flock -u 200

Upvotes: 1

Views: 756

Answers (1)

Charles Duffy

Reputation: 295650

Yes, what you're proposing is safe, within the specific (narrow) usage pattern given.

Things You Can Safely Do With A Single File

  • Open the file for append only (never with truncation) without already holding the lock.
  • Truncate the file only after the lock is held.
  • Modify the file while the lock is held, in a manner that doesn't change which inode the directory entry refers to.
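
For example, a minimal sketch of the single-file pattern (filenames are illustrative; assumes flock(1) from util-linux and bash):

outputFile=output.dat

# Safe without the lock: an append-mode open never destroys existing data.
exec 200>>"$outputFile"

# Take an exclusive lock on the already-open descriptor; blocks until granted.
flock -x 200

# Safe only while the lock is held: truncate and rewrite in place.
# The > redirection truncates, but the directory entry keeps the same inode.
grep -i error "$1" >"$outputFile"

# Release the lock and close the descriptor.
flock -u 200
exec 200>&-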

Things You Can Only Do With Two Separate Files

  • Initially open the lockfile with O_TRUNC.
  • Use the create-and-rename pattern to atomically modify the data file while the lock is held.
  • Delete the data file entirely (as with rm, ensuring that any newly-created version gets a different inode) while holding the lock.
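
For example, a minimal sketch of the two-file, create-and-rename pattern (paths and the .lock suffix are illustrative; assumes flock(1) from util-linux and mktemp(1)):

outputFile=output.dat
lockFile=/var/tmp/output.dat.lock

# The lock file carries no data, so opening it with truncation is harmless.
exec 200>"$lockFile"
flock -x 200

# Build the new contents in a temporary file on the same filesystem...
tmpFile=$(mktemp "$outputFile.XXXXXX")
grep -i error "$1" >"$tmpFile"

# ...then atomically swap it into place. The replacement has a different
# inode, which is exactly why the data file itself couldn't serve as the lock.
mv -- "$tmpFile" "$outputFile"

# Release the lock and close the descriptor.
flock -u 200
exec 200>&-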

Upvotes: 1
