Reputation: 31
I wanted a way to break the IDisposable chain when some nested class that you suddenly depend on implements IDisposable and you don't want that interface to ripple up the layers of your composite. Basically, I have weak subscriptions to IObservable&lt;T&gt;s via SubscribeWeakly() that I want to clean up when my object goes away, so the wrapper instances aren't leaked in case the observable never fires. That was the motivation, but I use it for other things as well.
Another post had a similar problem, and the answer basically stated that you can still access the disposables in your finalizer. However, you're not guaranteed what order the finalizers are run in, so disposing might be problematic.
Therefore, I needed a way to guarantee that the disposable is kept alive so I can call Dispose() on it in my finalizer. So I looked at GCHandle, which allows native (e.g. C++) code to hold, and keep alive, a managed object and its aggregates until the handle is freed and the object's lifetime returns to the control of .NET's memory manager. Coming from C++, I thought behavior similar to std::unique_ptr would be a good fit, so I came up with AutoDisposer:
public class AutoDisposer
{
    GCHandle _handle;

    public AutoDisposer(IDisposable disposable)
    {
        if (disposable == null) throw new ArgumentNullException();
        _handle = GCHandle.Alloc(disposable);
    }

    ~AutoDisposer()
    {
        try
        {
            var disposable = _handle.Target as IDisposable;
            if (disposable == null) return;
            try
            {
                disposable.Dispose();
            }
            finally
            {
                _handle.Free();
            }
        }
        catch (Exception) { }
    }
}
In the class that needs to dispose resources when it goes away, I would just assign a field like _autoDisposables = new AutoDisposer(disposables). This AutoDisposer would then get cleaned up by the garbage collector around the same time as the containing class. However, I'm wondering what the issues would be with this technique; I can already think of a few. Therefore, I use it sparingly, and I avoid it when implementing IDisposable isn't too much of a burden, when I need to call Dispose() deterministically, and so on.
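To make the field-assignment usage concrete, here is a minimal sketch (Watcher is a made-up consumer name; AutoDisposer is the class above, repeated so the snippet stands alone):

```csharp
using System;
using System.Runtime.InteropServices;

// AutoDisposer from above, repeated so this sketch compiles on its own.
public class AutoDisposer
{
    GCHandle _handle;

    public AutoDisposer(IDisposable disposable)
    {
        if (disposable == null) throw new ArgumentNullException();
        _handle = GCHandle.Alloc(disposable);
    }

    ~AutoDisposer()
    {
        try
        {
            var disposable = _handle.Target as IDisposable;
            if (disposable == null) return;
            try { disposable.Dispose(); }
            finally { _handle.Free(); }
        }
        catch (Exception) { }
    }
}

// Made-up consumer: holding the AutoDisposer in a field ties the
// subscription's lifetime to this instance without Watcher itself
// having to implement IDisposable.
public class Watcher
{
    readonly AutoDisposer _autoDisposables;

    public Watcher(IDisposable subscription)
    {
        _autoDisposables = new AutoDisposer(subscription);
    }
}
```

When a Watcher is abandoned, its AutoDisposer becomes unreachable too, and the AutoDisposer finalizer disposes the subscription even though the GCHandle kept it alive until then.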
Does anybody see any other issues? Is this technique even valid?
Upvotes: 3
Views: 855
Reputation: 81115
It is possible to design classes so that they can coordinate their finalization behavior with each other. For example, an object could accept a constructor parameter of type Action&lt;bool&gt; and specify that, if non-null, it will be called as the first step of Dispose(bool) [the backing field could be read with Interlocked.Exchange(ref theField, null) to ensure the delegate gets invoked at most once]. If a class that e.g. encapsulates a file included such a feature, and was wrapped in a class which encapsulates the file with extra buffering, the file would notify the buffering class that it was about to close, and the buffering class could thus ensure that all necessary data was written. Unfortunately, such a pattern is not common in the framework.
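A sketch of what that coordination hook might look like (CoordinatedFile and the callback name are illustrative, not from any framework class):

```csharp
using System;
using System.Threading;

// Illustrative only: a resource class that lets a wrapper hook the
// start of Dispose(bool), as described above.
public class CoordinatedFile : IDisposable
{
    Action<bool> _beforeDispose;

    public CoordinatedFile(Action<bool> beforeDispose)
    {
        _beforeDispose = beforeDispose;
    }

    public void Dispose()
    {
        Dispose(true);
        GC.SuppressFinalize(this);
    }

    protected virtual void Dispose(bool disposing)
    {
        // Interlocked.Exchange clears the field atomically, so the
        // callback runs at most once even if Dispose and the
        // finalizer race.
        var callback = Interlocked.Exchange(ref _beforeDispose, null);
        callback?.Invoke(disposing);
        // ... close the underlying file here ...
    }

    ~CoordinatedFile() { Dispose(false); }
}
```

A buffering wrapper would pass a delegate that flushes its buffer, so pending data gets written before the file actually closes.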
Given the lack of such a pattern, the only way that a class which encapsulates a buffered file could ensure that it manages to write out its data if it's abandoned, without the file getting closed before it can do so, would be to persist a static reference to the file somewhere (perhaps in a static ConcurrentDictionary&lt;BufferedWrapper, FileObject&gt;) and ensure that when it is cleaned up, it destroys that static reference, writes its data to the file, and then closes the file. Note that this approach should only be used if the wrapper object keeps exclusive control over the object that it wraps, and it requires extreme attention to detail. Finalizers have many weird corner cases, it's hard to handle them all properly, and any failure to handle an obscure corner case correctly is likely to result in Heisenbugs.
PS: Continuing along the ConcurrentDictionary approach, if you're using something like events, your main concerns are apt to be (1) ensuring that if an object is abandoned, the events don't hold a reference to anything "big"; and (2) ensuring that the number of abandoned objects in the ConcurrentDictionary cannot grow without bound.

The first issue can be handled by ensuring that there is no "strong" reference path from an event to any significant "forest" of interconnected objects; if a superfluous subscription only holds references to objects totalling 100 bytes or so, and they'll get cleaned up if any of the events ever fire, even a thousand abandoned subscriptions would represent a very minor problem [provided the number is bounded]. The second issue can be handled by having each subscription request poll some items in the dictionary (either on a per-request or amortized basis) to see if they're abandoned, and clean them up if so.

If some events are abandoned and never fire, and no new events of that type are ever added, those events may stick around indefinitely, but they'll be harmless. The only ways the events could be significantly harmful would be if they held references to big objects (which can be avoided using weak references), if an unbounded number of events could be added and abandoned without ever getting cleaned up (which won't happen if adding new events causes abandoned ones to get cleaned up), or if such events could waste CPU time continuously (which won't happen if the first attempt to fire them after the objects that cared about them are gone causes them to get cleaned up).
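The amortized polling idea might be sketched like this (all names here are invented; the table keys on a weak reference to the subscriber so abandoned entries can be detected):

```csharp
using System;
using System.Collections.Concurrent;
using System.Linq;

// Invented sketch of the static-table-with-amortized-sweep idea.
public static class SubscriptionTable
{
    // Weak key: lets us detect that the subscriber was collected.
    // Strong value: keeps the cleanup target alive until we sweep it.
    static readonly ConcurrentDictionary<WeakReference, IDisposable> _entries =
        new ConcurrentDictionary<WeakReference, IDisposable>();

    public static void Register(object subscriber, IDisposable subscription)
    {
        _entries.TryAdd(new WeakReference(subscriber), subscription);
        SweepAbandoned(); // amortized: each registration pays for some cleanup
    }

    static void SweepAbandoned()
    {
        // Snapshot the dead keys, then remove and dispose their entries.
        foreach (var dead in _entries.Keys.Where(k => !k.IsAlive).ToList())
        {
            if (_entries.TryRemove(dead, out var subscription))
                subscription.Dispose();
        }
    }
}
```

This keeps the number of abandoned entries bounded: they can only pile up while no new registrations occur, which is exactly the case the answer argues is harmless.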
Upvotes: 3
Reputation: 38130
I think you might've misunderstood IDisposable. The typical pattern on an IDisposable object is, loosely:
public class MyClass : IDisposable
{
    public void Dispose()
    {
        Dispose(true);
        GC.SuppressFinalize(this); // the finalizer no longer needs to run
    }

    protected virtual void Dispose(bool disposing)
    {
        if (disposing)
        {
            // Free managed resources
        }
        // Always free unmanaged resources
    }

    ~MyClass() { Dispose(false); }
}
Because the finalizer should always handle unmanaged resources, if you wait for it to run (which will be at some point in the future, when memory pressure triggers a garbage collection or one is triggered manually), you shouldn't leak. If you want to be deterministic about when those resources are freed, then you will have to expose IDisposable down your class hierarchy.
Upvotes: 3