Reputation: 1708
I have read the questions and answers I could find about clearing the MemoryCache in C#. There were many recommendations, like:
1. Enumerate the cache and remove all items (roughly the sketch shown after this list). According to others this is not good: getting the enumerator locks up the entire thing and all kinds of apocalypse happens, quoting a part of the documentation I have not found and a warning I have failed to reproduce. Anyway, I don't think this is a very efficient solution.
2. Store the keys in a separate collection and iterate that one to remove the items from the cache. Apart from not sounding very thread safe, this doesn't sound very efficient either.
3. Dispose the old cache and create a new one. This sounds good; the obvious problem, pointed out in several comments, is that existing references to the old cache can cause trouble. Of course the order of actions matters: you have to save a reference to the old one, create a new one in its place, and then dispose the old one. Not everyone seemed to notice this little nuance.
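For clarity, option 1 as I understood it would be something along these lines (just a sketch of the recommendation, not code I'm actually using):

// Snapshot the entries (ToList needs System.Linq), then remove them one by one.
// MemoryCache enumerates as KeyValuePair<string, object>.
foreach (var entry in _cache.ToList())
{
    _cache.Remove(entry.Key);
}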
So what now? One of my colleagues suggested using the MemoryCache for a problem of mine, and he wrapped the cache object in another class, which has the option to get a key (loading it from the db if needed), remove a key, or clear the cache. The first two are not important right now; the third one is interesting. I have used the third solution for this, along the lines of "it is guaranteed that there are no additional references to the MemoryCache object apart from my own implementation's". So the relevant code is:
Constructor:
public MyCache()
{
    _cache = new MemoryCache("MyCache");
}
ClearCache:
public void ClearCacheForAllUsers()
{
    var oldCache = _cache;
    _cache = new MemoryCache("MyCache");
    oldCache.Dispose();
}
_cache is the private MemoryCache object.
Can this cause problems in a multi-threaded environment? I have some concerns about calling read and Dispose in parallel. Should I implement some kind of locking mechanism which allows concurrent reads, but makes the clear-cache function wait until the current reads on the cache have finished?
My guess is yes, this is required, but I'd like to get some insight before getting to it.
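If it does turn out to be necessary, what I have in mind is roughly the following (just a sketch using ReaderWriterLockSlim from System.Threading; the Get method is simplified):

private readonly ReaderWriterLockSlim _lock = new ReaderWriterLockSlim();
private MemoryCache _cache = new MemoryCache("MyCache");

public object Get(string key)
{
    // Many readers can hold the read lock at the same time.
    _lock.EnterReadLock();
    try
    {
        return _cache[key];
    }
    finally
    {
        _lock.ExitReadLock();
    }
}

public void ClearCacheForAllUsers()
{
    // The write lock waits for all current readers to finish and blocks
    // new ones while the caches are swapped and the old one is disposed.
    _lock.EnterWriteLock();
    try
    {
        var oldCache = _cache;
        _cache = new MemoryCache("MyCache");
        oldCache.Dispose();
    }
    finally
    {
        _lock.ExitWriteLock();
    }
}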
Robert
EDIT 1 on Voo's answer: It is guaranteed that no one outside of the "wrapper" (MyCache) will get a reference to the _cache object. My worry is something like the following:
T1:
    MyCache.ClearAll()
    var oldCache = _cache
T2:
    MyCache.GetOrReadFromDb("stuff")
T1:
    _cache = new MemoryCache("MyCache")
    oldCache.Dispose()
T2:
    *something bad*
Apart from the T2 thread still using the old cache, which is not preferred but something I can live with, can there be a situation where the Get method somehow accesses the old cache in a disposed state, or reads the new one before the data is in it?
Imagine the GetOrReadFromDb function looking something like this:
public object GetOrReadFromDb(string key)
{
    if (_cache[key] == null)
    {
        // Read stuff from DB, and insert into the cache
    }
    return _cache[key];
}
I think it is possible that control is taken away from the reading thread and given to the clearing thread, for example right after reading from the db and before returning the _cache[key] value, and that can cause problems. Is it a real issue?
Upvotes: 3
Views: 3801
Reputation: 19107
The issue with your solution is that it introduces a possible race condition which would need ugly locks or other thread synchronization solutions.
Instead, use the built-in solution.
You can use a custom ChangeMonitor class to clear your items from the cache. You can add a ChangeMonitor instance to the ChangeMonitors property of the CacheItemPolicy you set when you call MemoryCache.Set.
For example:
class MyCm : ChangeMonitor
{
    string uniqueId = Guid.NewGuid().ToString();

    public MyCm()
    {
        InitializationComplete();
    }

    protected override void Dispose(bool disposing) { }

    public override string UniqueId
    {
        get { return uniqueId; }
    }

    public void Stuff()
    {
        this.OnChanged(null);
    }
}
Usage example:
var cache = new MemoryCache("MyFancyCache");
var obj = new object();
var cm = new MyCm();
var cip = new CacheItemPolicy();
cip.ChangeMonitors.Add(cm);
cache.Set("MyFancyObj", obj, cip);

var o = cache.Get("MyFancyObj");
Console.WriteLine(o != null); // True - the item is still cached
o = cache.Get("MyFancyObj");
Console.WriteLine(o != null); // True

cm.Stuff(); // signal the monitor; this evicts the entry it is attached to

o = cache.Get("MyFancyObj");
Console.WriteLine(o != null); // False - the entry has been removed
o = cache.Get("MyFancyObj");
Console.WriteLine(o != null); // False
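To clear the whole cache this way, one option is to attach a fresh monitor to every entry and keep all of them in a shared collection, then signal them all at once. A rough sketch of how that could look in the question's wrapper (the names ClearAllChangeMonitor, MyCache.Set etc. are illustrative, not from any library):

using System;
using System.Collections.Concurrent;
using System.Runtime.Caching;

class ClearAllChangeMonitor : ChangeMonitor
{
    private readonly string uniqueId = Guid.NewGuid().ToString();

    public ClearAllChangeMonitor()
    {
        InitializationComplete();
    }

    public override string UniqueId
    {
        get { return uniqueId; }
    }

    protected override void Dispose(bool disposing) { }

    public void Signal()
    {
        OnChanged(null);
    }
}

class MyCache
{
    private readonly MemoryCache _cache = new MemoryCache("MyCache");
    private readonly ConcurrentQueue<ClearAllChangeMonitor> _monitors =
        new ConcurrentQueue<ClearAllChangeMonitor>();

    public void Set(string key, object value)
    {
        var monitor = new ClearAllChangeMonitor();
        _monitors.Enqueue(monitor);

        var policy = new CacheItemPolicy();
        policy.ChangeMonitors.Add(monitor);
        _cache.Set(key, value, policy);
    }

    public object Get(string key)
    {
        return _cache.Get(key);
    }

    public void ClearCacheForAllUsers()
    {
        // Signalling a monitor evicts the entry it is attached to, so
        // draining the queue clears every entry that was added via Set.
        ClearAllChangeMonitor monitor;
        while (_monitors.TryDequeue(out monitor))
        {
            monitor.Signal();
        }
    }
}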
Upvotes: 2
Reputation: 30255
Never used the MemoryCache class, but there's an obvious race condition going on. It's possible for a thread T1 to call GetElement (or whatever you call it) while another thread T2 is executing ClearCacheForAllUsers.
Then the following can happen:
T1:
    reg = _cache
T2:
    oldCache = _cache
    _cache = new MemoryCache("MyCache")
    oldCache.Dispose()
T1:
    reg.Get() // Ouch, bad! Dispose tells us this results in "unexpected behavior"
The trivial solution would be to introduce synchronization.
If locking is unacceptable, you could use some clever tricks, but the easiest solution, I'd say, is to rely on finalization. Sure, generally a bad idea, but assuming that the only thing the cache does is use up memory, it doesn't hurt for the old cache to only be collected under memory pressure, and by letting the GC worry about it you avoid having to worry about race conditions. Note though that _cache has to be volatile.
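If I read that right, the lock-free variant would look roughly like this (a sketch following the question's wrapper; note that Dispose is never called, the old instance is simply dropped):

private volatile MemoryCache _cache = new MemoryCache("MyCache");

public object Get(string key)
{
    // Copy the volatile field into a local so the whole lookup runs
    // against one single instance, even if a clear happens concurrently.
    var cache = _cache;
    return cache[key];
}

public void ClearCacheForAllUsers()
{
    // No Dispose: the old instance is dropped and reclaimed by the GC
    // once no reading thread holds a reference to it any more.
    _cache = new MemoryCache("MyCache");
}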
Another possible approach: as an implementation detail, Get apparently always returns null after the cache is disposed. In that case you could avoid locking most of the time, except in these rare situations. Although this is implementation defined and I wouldn't like to rely on such a small detail personally.
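Exploiting that detail would look something like this (just a sketch of the idea, which again I wouldn't rely on): treat null as trustworthy only if the instance you read from is still the current one, and retry otherwise.

public object Get(string key)
{
    while (true)
    {
        var cache = _cache; // volatile read
        var value = cache[key];
        if (value != null)
        {
            return value;
        }
        // null either means "not cached" or "this instance was disposed
        // under us"; only trust it if the instance is still the current one.
        if (ReferenceEquals(cache, _cache))
        {
            return null;
        }
    }
}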
Upvotes: 0