Reputation: 309
I see a lot of discussion regarding the use of a static lock in a website to serialize writes to a single log file.
There are some who say never use a static lock and others who say it is clearly called for in such an instance. The naysayers argue: What if the IIS application pool is recycled? Then the static lock is lost and a file write error could occur.
My specific ask: If your population of users is reasonably small (n < 1000), and you have a single line of code inside your lock that executes a file write of, say, < 500 characters, is this an astronomically improbable issue with which to be concerned?
And if it is an issue of any magnitude, then what is the simplest path of improvement to avoid this rare IIS recycle/static lock error? Would a simple try/catch on the write even "catch" the multiple file access in such a case?
Upvotes: 0
Views: 982
Reputation: 52230
Use FileShare.ReadWrite and useAsync = false
Access the file by creating a FileStream using this constructor (don't use File.Open) and specify the following arguments:
var stream = new FileStream(path,
                            FileMode.Append,      // always write to the end of the file
                            FileAccess.Write,
                            FileShare.ReadWrite,  // allow other handles to open the file concurrently
                            4096,                 // buffer size (the default)
                            false);               // useAsync = false: synchronous I/O
The important arguments to note are FileShare.ReadWrite and useAsync = false.
FileShare.ReadWrite: Allows subsequent opening of the file for reading or writing. If this flag is not specified, any request to open the file for reading or writing (by this process or another process) will fail until the file is closed. However, even if this flag is specified, additional permissions might still be needed to access the file.
useAsync: Specifies whether to use asynchronous I/O or synchronous I/O. However, note that the underlying operating system might not support asynchronous I/O, so when specifying true, the handle might be opened synchronously depending on the platform. When opened asynchronously, the BeginRead and BeginWrite methods perform better on large reads or writes, but they might be much slower for small reads or writes. If the application is designed to take advantage of asynchronous I/O, set the useAsync parameter to true. Using asynchronous I/O correctly can speed up applications by as much as a factor of 10, but using it without redesigning the application for asynchronous I/O can decrease performance by as much as a factor of 10.
By using these parameters, you obtain a file handle that allows other processes to access the file in parallel. Meanwhile, your writes will be synchronous, which prevents your output from being split in half by another process's writes. There is still locking, but it's handled by the underlying O/S and is transparent to you and any competing process.
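Put together, a single append looks like this (note that TextWriter itself is abstract, so you need a concrete writer such as StreamWriter to get text into the stream):

using (var stream = new FileStream(path, FileMode.Append, FileAccess.Write, FileShare.ReadWrite, 4096, false))
using (var writer = new StreamWriter(stream))
{
    // Disposing the writer flushes its buffer and closes the underlying stream.
    writer.Write(message);
}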
Add a lock
If it makes you feel better, you can wrap it in a lock:
lock (lockObject)
{
    using (var stream = new FileStream(path, FileMode.Append, FileAccess.Write, FileShare.ReadWrite, 4096, false))
    using (var writer = new StreamWriter(stream))  // TextWriter is abstract; StreamWriter is a concrete implementation
    {
        writer.Write(message);
    }
}
Note that the lock protects you against competing threads, but not competing processes. If you have a worker thread that handles the log writes (e.g. with a queue and a producer/consumer pattern), you probably don't need it, and it adds unnecessary overhead. But if you are writing to the log directly from the web worker threads, you do need it.
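For reference, a minimal sketch of that producer/consumer approach might look like this (the QueuedLogger name and the shutdown handling are illustrative assumptions, not a prescription):

using System.Collections.Concurrent;
using System.IO;
using System.Threading;

public class QueuedLogger
{
    // Thread-safe queue; web worker threads produce, one background thread consumes.
    private readonly BlockingCollection<string> _queue = new BlockingCollection<string>();
    private readonly Thread _worker;
    private readonly string _path;

    public QueuedLogger(string path)
    {
        _path = path;
        _worker = new Thread(Consume) { IsBackground = true };
        _worker.Start();
    }

    // Called from any thread; BlockingCollection.Add is thread-safe, so no lock is needed.
    public void Log(string message)
    {
        _queue.Add(message);
    }

    // Only this thread ever touches the file, so the writes themselves need no lock.
    private void Consume()
    {
        foreach (var message in _queue.GetConsumingEnumerable())
        {
            using (var stream = new FileStream(_path, FileMode.Append, FileAccess.Write, FileShare.ReadWrite, 4096, false))
            using (var writer = new StreamWriter(stream))
            {
                writer.WriteLine(message);
            }
        }
    }

    // Drains the queue and stops the worker (e.g. on application shutdown).
    public void Shutdown()
    {
        _queue.CompleteAdding();
        _worker.Join();
    }
}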
Cross-process mutex
The above ought to be pretty darned safe, even during an app pool recycle. At least I've never had any problems. But if you are really paranoid and your logging is mission critical, you could use a named mutex for locking that crosses process boundaries.
using (var mutex = new Mutex(false, "MyLoggingMutex"))
{
    try
    {
        // Blocks until no other thread or process holds the named mutex.
        mutex.WaitOne();
        using (var stream = new FileStream(path, FileMode.Append, FileAccess.Write, FileShare.ReadWrite, 4096, false))
        using (var writer = new StreamWriter(stream))
        {
            writer.Write(message);
        }
    }
    finally
    {
        mutex.ReleaseMutex();
    }
}
There's quite a bit of overhead with this sort of thing, so I probably would not use it unless logging were mission critical, e.g. you're dumping auditable data to the log file that might be used in support of non-repudiation (to prove who did what so you don't get sued, that sort of thing). But if that were the case, honestly, I'd stick with a database, which makes the problem trivial to solve.
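For what it's worth, the database version really is trivial by comparison. Here's a minimal sketch (the AuditLog table, its columns, and the connection string are assumptions for illustration):

using System;
using System.Data.SqlClient;

public static void LogToDatabase(string connectionString, string message)
{
    using (var connection = new SqlConnection(connectionString))
    using (var command = new SqlCommand(
        "INSERT INTO AuditLog (LoggedAt, Message) VALUES (@loggedAt, @message)", connection))
    {
        command.Parameters.AddWithValue("@loggedAt", DateTime.UtcNow);
        command.Parameters.AddWithValue("@message", message);
        connection.Open();
        // The database engine serializes concurrent inserts across threads and
        // processes, so app pool recycles stop being your problem entirely.
        command.ExecuteNonQuery();
    }
}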
Upvotes: 2