Reputation: 4388
We have a heavily used .NET 3.5 application that reads "expensive to create" data and caches it. However, we are getting a lot of errors around both reading the cache file and writing to the cache file. The code below follows some of the advice I have received on the StackOverflow forums.
Is this a correct way of reading and writing files? Please advise.
private XmlDocument ReadFromFile(string siteID, Type stuffType)
{
    var fsPath = FileSystemPath(siteID, stuffType.Name);
    var result = new XmlDocument();
    using (var streamReader = new StreamReader(fsPath))
    //using (var fileStream = new FileStream(fsPath, FileMode.Open, FileAccess.Read, FileShare.Read))
    {
        result.Load(streamReader);
    }
    //GC.Collect();
    return result;
}
private readonly object thisObject = new object();

private void WriteToFile(string siteID, XmlDocument stuff, string fileName)
{
    var fsPath = FileSystemPath(siteID, fileName);
    lock (thisObject)
    {
        //using (var fileStream = new FileStream(fsPath, FileMode.Open, FileAccess.Read, FileShare.ReadWrite))
        using (var streamWriter = new StreamWriter(fsPath))
        {
            stuff.Save(streamWriter);
        }
        //GC.Collect();
    }
}
Upvotes: 1
Views: 8489
Reputation: 12954
If you want to synchronize access to a resource, there are several options, depending on the context. There are several (generic) situations:
Single process, single thread
No synchronization required.
Single process, multiple threads
Use a simple locking mechanism like lock or ReaderWriterLockSlim.
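For the single-process case, guarding the question's read and write paths with a ReaderWriterLockSlim could look like the sketch below. The FileSystemPath helper and method shapes mirror the question's code; the lock field name is illustrative.

```csharp
// Sketch: one process, many threads. Readers proceed concurrently;
// a writer gets exclusive access.
private static readonly ReaderWriterLockSlim CacheLock = new ReaderWriterLockSlim();

private XmlDocument ReadCache(string fsPath)
{
    CacheLock.EnterReadLock();      // many threads may hold the read lock at once
    try
    {
        var doc = new XmlDocument();
        doc.Load(fsPath);
        return doc;
    }
    finally
    {
        CacheLock.ExitReadLock();
    }
}

private void WriteCache(string fsPath, XmlDocument doc)
{
    CacheLock.EnterWriteLock();     // exclusive: blocks both readers and writers
    try
    {
        doc.Save(fsPath);
    }
    finally
    {
        CacheLock.ExitWriteLock();
    }
}
```

The try/finally blocks ensure the lock is released even when Load or Save throws, which matters here since the question reports frequent I/O errors.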
Multiple processes, single machine
Use a (named) Mutex. A Mutex is not very fast; more about performance at the bottom.
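A minimal sketch of the named-Mutex approach: the mutex name below is an illustrative choice, but every process must agree on it (a "Global\" prefix makes it visible across sessions).

```csharp
// Sketch: multiple processes on one machine, serialized by a named Mutex.
using (var mutex = new Mutex(false, @"Global\MyApp.CacheFile"))
{
    mutex.WaitOne();                // blocks until no other process holds it
    try
    {
        // read or write the cache file here
    }
    finally
    {
        mutex.ReleaseMutex();
    }
}
```

Note that WaitOne can throw AbandonedMutexException if another process died while holding the mutex; callers may want to catch that and treat the file as suspect.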
Multiple processes, multiple machines
Now it starts getting interesting. .NET does not have an out-of-the-box solution for this. I can think of two solutions:
1. Create a centralized service that performs all reads and writes. In the case of Ajit Goel this would be as simple as a single file master that is in control of its files.
2. Store all your files in a central database and let it do all the synchronization.
Performance
If performance starts to be an issue, you could try to create a cache.
Upvotes: 7
Reputation: 15397
I think that the ReaderWriterLock in combination with FileShare.ReadWrite is your solution. (Note: the documentation page I'm linking to refers you to a newer version called ReaderWriterLockSlim, which should be at least as good.)
You need FileShare.ReadWrite on each thread, so they can all access the file however necessary. Any time a thread needs to read, have it AcquireReaderLock (and ReleaseReaderLock when the read is complete). When you want to write, use UpgradeToWriterLock, and when you're done, DowngradeFromWriterLock.
This should let all your threads access the file read-only at all times, and let any one thread grab the access to write whenever necessary.
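The acquire/upgrade/downgrade pattern described above could be sketched like this with the older ReaderWriterLock (ReaderWriterLockSlim has the same shape with EnterUpgradeableReadLock); the method and field names are illustrative:

```csharp
// Sketch: read under a shared lock, upgrade to exclusive only when a write is needed.
private static readonly ReaderWriterLock RwLock = new ReaderWriterLock();

private void ReadThenMaybeWrite(string fsPath, XmlDocument doc, bool mustWrite)
{
    RwLock.AcquireReaderLock(Timeout.Infinite);
    try
    {
        doc.Load(fsPath);           // shared read
        if (mustWrite)
        {
            var cookie = RwLock.UpgradeToWriterLock(Timeout.Infinite);
            try
            {
                doc.Save(fsPath);   // exclusive write
            }
            finally
            {
                RwLock.DowngradeFromWriterLock(ref cookie);
            }
        }
    }
    finally
    {
        RwLock.ReleaseReaderLock();
    }
}
```

Keep in mind this only coordinates threads within one process; it does nothing for other processes touching the same file.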
Hope that helps!
Upvotes: 1
Reputation: 38367
I believe everyone has to open the file in FileShare.ReadWrite mode.
If someone opens it in FileShare.Read mode, and someone else tries to write to it, it will fail. I don't think they are compatible, because one is saying "share for read only", but the other wants to write. You may need to use FileShare.ReadWrite on all of them, OR minimize the amount of writing to minimize the conflicts.
http://msdn.microsoft.com/en-us/library/system.io.fileshare.aspx
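Concretely, opening with FileShare.ReadWrite on every handle might look like this sketch (fsPath is assumed to come from something like the question's FileSystemPath helper):

```csharp
// Sketch: every participant opens the file with FileShare.ReadWrite,
// so concurrent readers and a writer can coexist at the OS level.
using (var fs = new FileStream(fsPath, FileMode.Open,
                               FileAccess.Read, FileShare.ReadWrite))
using (var reader = new StreamReader(fs))
{
    var doc = new XmlDocument();
    doc.Load(reader);
}
```

FileShare.ReadWrite only stops the OS from rejecting the open; it does not prevent a reader from seeing a half-written file, so application-level locking is still needed on top.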
Another option is to use FileShare.Read, and copy the file when making modifications. If there is a single entry point for all modifications, then it can use FileShare.Read to copy the file. Modify the copy, and then update some sort of variable/property that indicates the current file path. Once that is updated, all other processes reading from the file would use that new location. This would allow the modification to occur and complete, and then make all of the readers aware of the new modified file. Again, only viable if you can centralize the modifications. Maybe through some sort of modification request queue if necessary.
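The copy-and-publish idea above could be sketched as follows; CurrentCachePath is a hypothetical field that readers consult before opening the file, not part of any existing API.

```csharp
// Sketch: modify a fresh copy, then atomically publish its path.
// Readers of the old file are never disturbed mid-write.
private static volatile string CurrentCachePath;

private void ModifyCache(XmlDocument doc)
{
    // write to a new file so existing readers keep a consistent view
    var newPath = Path.Combine(Path.GetTempPath(), Guid.NewGuid() + ".xml");
    doc.Save(newPath);

    // publish the new location; readers pick it up on their next read
    CurrentCachePath = newPath;
}
```

Old files would need periodic cleanup, and the published path itself must be shared between processes (e.g. via a well-known pointer file) for the multi-process case.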
Upvotes: 0