Reputation: 22814
Why do embedded platform developers continuously try to remove C++ exception usage from their SDKs?
For example, the Bada SDK suggests the following workaround for exception usage, which looks exceptionally ugly:
result
MyApp::InitTimer()
{
    result r = E_SUCCESS;

    _pTimer = new Timer;
    r = _pTimer->Construct(*this);
    if (IsFailed(r))
    {
        goto CATCH;
    }

    r = _pTimer->Start(1000);
    if (IsFailed(r))
    {
        goto CATCH;
    }

    return r;

CATCH:
    return r;
}
What are the reasons for this behavior?
As far as I know, ARM compilers fully support C++ exceptions, so that can't actually be the issue. What else? Is the overhead of exception usage and stack unwinding on ARM platforms really so big that it's worth spending a lot of time on such workarounds? Or is it something else I'm not aware of?
Thank you.
Upvotes: 37
Views: 11791
Reputation: 69988
Modern C++ compilers can reduce the runtime overhead of exceptions to as little as 3%. Still, if programmers find even that too expensive, they resort to such dirty tricks.
See Bjarne Stroustrup's page on why to use exceptions.
Upvotes: 4
Reputation: 10393
Just my 2 cents...
I consult exclusively on embedded systems, most of them hard real-time and/or safety/life critical. Most of them run in 256K of flash/ROM or less - in other words, these are not "PC-like" VME bus systems with 1GB+ of RAM/flash and a 1GHz+ CPU. They are deeply embedded, somewhat resource-constrained systems.
I would say at least 75% of the products which use C++ disable exceptions at the compiler (i.e., code compiled with compiler switches that disable exceptions). I always ask why. Believe it or not, the most common answer is NOT the runtime or memory overhead / cost.
The answers are usually some mix of:
Also, there is often some nebulous uncertainty/fear about overhead, but it is almost always unquantified/unprofiled; it's just kind of thrown out there and taken at face value. I can show you reports/articles that put the overhead of exceptions at 3%, 10-15%, or ~30% - take your pick. People tend to quote the figure that supports their own viewpoint. Almost always the article is outdated, the platform/toolset is completely different, etc., so as Roddy says, you must measure yourself, on your platform.
I'm not necessarily defending any of these positions, I'm just giving you real-world feedback / explanations I've heard from many firms working with C++ on embedded systems, since your question is "why do so many embedded developers avoid exceptions?"
Upvotes: 60
Reputation: 68033
I think it's mostly FUD, these days.
Exceptions do have a small overhead at the entry and exit to blocks that create objects that have constructors/destructors, but that really shouldn't amount to a can of beans in most cases.
Measure first, Optimize second.
However, throwing an exception is usually slower than just returning a boolean flag, so throw exceptions for exceptional events only.
In one case, I saw that the RTL was constructing entire printable stack traces from symbol tables whenever an exception was thrown for potential debugging use. As you can imagine, this was Not a Good Thing. This was a few years back and the debugging library was hastily fixed when this came to light.
But, IMO, the reliability that you can gain from correct use of exceptions far outweighs the minor performance penalty. Use them, but carefully.
Edit:
@jalf makes some good points, and my answer above was targeted at the related question of why many embedded developers in general still disparage exceptions.
But, if the developer of a particular platform SDK says "don't use exceptions", you'd probably have to go with that. Maybe there are particular issues with the exception implementation in their library or compiler - or maybe they are concerned about exceptions thrown in callbacks causing issues due to a lack of exception safety in their own code.
Upvotes: 9
Reputation: 33116
An opinion to the contrary of the "gotos are evil" espoused in the other answers. I'm making this community wiki because I know that this contrary opinion will be flamed.
Any realtime programmer worth their salt knows this use of goto. It is a widely used and widely accepted mechanism for handling errors. Many hard realtime programming environments do not implement <setjmp.h>. Exceptions are conceptually just constrained versions of setjmp and longjmp. So why provide exceptions when the underlying mechanism is banned?
An environment might allow exceptions if all thrown exceptions can always be guaranteed to be handled locally. The question is, why do this? The only justification is that gotos are always evil. Well, they aren't always evil.
Upvotes: 5
Reputation: 119877
This attitude towards exceptions has nothing whatsoever to do with performance or compiler support, and everything to do with the idea that exceptions add complexity to the code.
This idea, as far as I can tell, is nearly always a misconception, but it seems to have powerful proponents for some inconceivable reason.
Upvotes: 5
Reputation: 247969
I can think of a couple of possible reasons:
Upvotes: 20