Gordon

Reputation: 6863

Variations on file locking

I started a thread over here, to ask about "concurrent" writes to an XML file, and it got flagged as a duplicate and referenced here, to a thread that talks about creating a lock file in the same folder as the write file as a means of handling the situation. This seems inelegant to me, writing a hidden file to the network, especially when we have the ability to lock a file, just not the ability (it seems) to lock a file and then, you know, do something with it. So, my thought is to take a different approach:

1. Lock the file with $file = [IO.File]::Open($path, 'Open', 'ReadWrite', 'None'). I have verified I can't lock it twice, so only one instance of my code can have a lock at any one time.
2. Copy-Item to the local temp folder.
3. Read that copy and append data as needed.
4. Save back over the temp file.
5. Remove the lock with $file.Close().
6. Immediately Copy-Item the temp file back over the original.

The risk seems to be between 5 & 6. Another instance could set a lock after the first instance removes the lock, but before it overwrites the file with the revised temp file.
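As a rough sketch of the sequence above (paths and the appended content are illustrative): note that because the file is opened with a 'None' share mode, a separate Copy-Item on the locked file would itself fail with a sharing violation, so step 2's read has to go through the already-open handle instead:

```powershell
$path = '\\server\share\data.xml'            # hypothetical path
# 1: Open with an exclusive lock (no other handle can open the file).
$file = [IO.File]::Open($path, 'Open', 'ReadWrite', 'None')
try {
    # 2-3: Read via the locked handle; Copy-Item on $path would be blocked.
    $reader = New-Object IO.StreamReader($file)
    $content = $reader.ReadToEnd()
    $newContent = $content + "`r`n<entry>appended</entry>"   # illustrative edit
    # 4: Save the revision to a temp file.
    $temp = Join-Path $env:TEMP 'data.xml'
    Set-Content -LiteralPath $temp -Value $newContent
} finally {
    # 5: Release the lock.
    $file.Close()
}
# 6: The race window is here: another instance can lock $path
#    after Close() but before this copy completes.
Copy-Item -LiteralPath $temp -Destination $path
```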

Is that risk the reason for the separate lock file approach? Because then the "lock" stays in place until after the revisions are saved?

It all seems like so much nasty kludge for something that I would think .NET or PowerShell should handle. I mean, a StreamReader/StreamWriter with a -Lock parameter, one that lets you pull the file in, mess with it, and save it, seems so basic and fundamental that I can't believe it isn't built in.

Upvotes: 1

Views: 60

Answers (2)

mklement0

Reputation: 438273

Mike Robinson's helpful answer sums up best practices well.

As for your question:

The risk seems to be between 5 & 6. Another instance could set a lock after the first instance removes the lock, but before it overwrites the file with the revised temp file.

Is that risk the reason for the separate lock file approach? Because then the "lock" stays in place until after the revisions are saved?

Yes, a separate lock file that cooperating processes respect will prevent such race conditions.

However, it is possible to solve this problem without a lock file, albeit at the cost of longer update times:

  • A lock file allows you to "claim" the file without yet locking it exclusively, so you can prepare the update while other processes can still read the file. You can then limit exclusive locking to the act of rewriting / replacing the file using previously prepared content.

    • In other words: by convention, the lock file guarantees the atomicity of the update operation, but minimizes its duration by limiting exclusive locking to just the act of rewriting / replacing (excluding the time spent on reading and update preparation).
  • Alternatively, you can guarantee atomicity by opening the file with an exclusive lock that you don't release until you've read, modified, and saved back.

    • In other words: The implementation becomes simpler (no lock file), but updates take longer.

      • This answer to your other question demonstrates the technique.
    • Even then, however, you need cooperation from the other processes: that is, both readers and writers need to be prepared to retry in a loop if opening the file fails (temporarily) during an ongoing update operation.
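The second approach, holding one exclusive lock across the entire read-modify-write, combined with the retry loop described above, might be sketched like this (path, retry count, and the appended content are illustrative assumptions):

```powershell
# Sketch: hold a single exclusive lock for the whole update, so there is
# no unlock/relock gap. Cooperating processes must use the same retry logic.
$path = '\\server\share\data.xml'   # hypothetical path
$maxTries = 10
for ($i = 1; $i -le $maxTries; $i++) {
    try {
        $file = [IO.File]::Open($path, 'Open', 'ReadWrite', 'None')
        break   # lock acquired
    } catch [IO.IOException] {
        if ($i -eq $maxTries) { throw }      # give up after $maxTries attempts
        Start-Sleep -Milliseconds 200        # back off, then retry
    }
}
try {
    # Read the current content through the locked handle.
    $reader = New-Object IO.StreamReader($file)
    $content = $reader.ReadToEnd()
    $newContent = $content + "`r`n<entry>appended</entry>"   # illustrative edit

    # Rewrite in place through the same handle; the lock never drops.
    $file.SetLength(0)                       # truncate; position moves to 0
    $writer = New-Object IO.StreamWriter($file)
    $writer.Write($newContent)
    $writer.Flush()                          # push buffered text to the file
} finally {
    $file.Close()   # lock released only after the update is complete
}
```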

Upvotes: 0

Mike Robinson

Reputation: 8945

A practice that is often used – by mutual cooperation of all applications – is a "sentinel file" or "lock file." Sometimes the mere presence of the file is enough; sometimes it becomes "the file that you lock."

All of the applications must understand your convention and must respect it. This will allow you to manipulate the XML file without interference by other cooperating applications.
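One common way to implement such a convention (file names here are illustrative) is to create the lock file atomically with the 'CreateNew' mode, which fails if the file already exists, i.e., if another process currently holds the claim:

```powershell
$lockPath = '\\server\share\data.xml.lock'   # hypothetical sentinel file
try {
    # 'CreateNew' throws if the lock file already exists,
    # so creation doubles as an atomic "claim" operation.
    $lock = [IO.File]::Open($lockPath, 'CreateNew', 'ReadWrite', 'None')
    try {
        # ... read, modify, and rewrite the XML file here,
        #     free of interference from cooperating processes ...
    } finally {
        $lock.Close()
        Remove-Item -LiteralPath $lockPath   # release the claim
    }
} catch [IO.IOException] {
    Write-Warning 'File is claimed by another process; retry later.'
}
```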

Upvotes: 1
