Reputation: 14869
What negative/undefined behaviour could arise from calling a save function (à la boost-serialize) within a class's ~dtor?
Upvotes: 3
Views: 346
Reputation: 279305
You have two concerns, one of which is a consequence of the other:
1) You should not allow any exception to escape the destructor. If you do, and if the destructor is being called as part of stack unwinding, then the runtime will call std::terminate() and end your program. This is not undefined behaviour, but it's pretty negative.
Because of this (and also of course because destructors don't return a value):
2) There's no reasonable way for your destructor to indicate success or failure ("reasonable" meaning, without building some kind of separate error-reporting system). Since the user of your class might want to know whether the save happened or not, preferably with a sensible API to do so, this means that destructors can only save data on a "best effort" basis. If the save fails then the object still gets destroyed, and so presumably its data is lost.
There is a strategy for such situations, used for example by file streams. It works like this:
- Provide a flush() (or in your case save()) function that saves the data.
- Have the destructor call that function (as close() does for streams). Catch any exceptions it can throw and ignore any errors.

That way, users who need to know whether the save succeeded or not call save() themselves to find out. Users who don't care (or who wouldn't mind it succeeding if possible in the case that an exception is thrown and the object is destroyed as part of stack unwinding) can let the destructor try.
That is, your destructor can attempt to do something that might fail, as a last-ditch effort, but you should additionally provide a means for users to do that same thing "properly", in a way that informs them of success or failure.
And yes, this does incidentally mean that using streams without flushing them and checking the stream state for failure is not using them "properly", because you have no way of knowing whether the data was ever written or not. But there are situations where that's good enough, and in the same kinds of situation it might be good enough for your class to save in its destructor.
Upvotes: 7
Reputation: 18751
It is a bad idea.
So I just want to make one more point: what do you actually gain by serializing in the destructor?
If you are making use of RAII, you know the destructor will run even if an exception is propagating. But this isn't much of a benefit: even though the destructor runs, you can't guarantee the serialization will succeed, since it can throw (in this case at least). You also lose much of the ability to handle a failure properly.
Upvotes: 2
Reputation: 3929
No, it's not a bad idea, but it isn't a terribly good idea either! Sometimes, though, it's the right thing to do.
As long as you protect your destructor from throwing exceptions, there is nothing against it.
Upvotes: 1
Reputation: 9843
The issue is that boost-serialize
can throw an exception. That means if the destructor is being called because an exception is propagating and is cleaning up the stack as it unwinds then your application will terminate if the destructor of the object throws another exception.
So to summarize: you only ever want one exception propagating at a time. If you end up with more than one, your application will terminate, which defeats the purpose of exceptions.
Upvotes: 3