Reputation: 1507
On a project I am working on, I noticed that a dynamic downcast fails. A quick look at the code confirmed that the particular object was in fact never of that type. However, I saw that other developers force this very same cast by applying a static cast instead.
In my opinion this is a dangerous and fragile approach, as shown in the example below. Isn't this generally bad programming practice?
class Base
{
public:
    Base() { m_p = 0; }
    virtual ~Base() {}   // Base needs a virtual function, or dynamic_cast won't compile
protected:
    int m_p;
};

class Derived : public Base
{
public:
    Derived() { m_arr = new int[1]; m_arr[0] = 0; }
    ~Derived() { delete[] m_arr; }
    void Add_A() { m_p += 2; }
    void Add_B() { m_arr[0] += 3; }
private:
    int* m_arr;
};
Base* parent = new Base();

// Fails as expected: the object is not a Derived, so c_d is null.
Derived* c_d = dynamic_cast<Derived*>(parent);

// "Succeeds" only in that it compiles; using c_s is undefined behaviour.
Derived* c_s = static_cast<Derived*>(parent);

c_s->Add_A(); // appears to work
c_s->Add_B(); // crashes, since m_arr does not exist
Upvotes: 0
Views: 415
Reputation: 50111
Using static_cast because the dynamic_cast fails is not just bad practice, it pretty much guarantees your code is incorrect and broken. As you demonstrated with your crash, you should never do that; fix the code instead, because swapping in the other cast does not solve any of your problems.
However, there is a legitimate reason to downcast with static_cast, namely dynamic_cast's runtime cost. If you are absolutely certain that the dynamic_cast would always succeed, you can replace it with a static_cast where performance constraints require it.
(Although in any case you should rethink why you need to downcast in the first place. Also, if dynamic_cast is actually too slow for your use case, there's a good chance you don't want virtual function calls to begin with.)
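To make that legitimate case concrete, here is a minimal sketch (the Shape/Circle/Process names are purely illustrative, not from the question): when the code that performs the downcast also controls the dynamic type, the static_cast cannot be wrong, and the runtime check a dynamic_cast would do buys nothing.

class Shape
{
public:
    virtual ~Shape() {}
};

class Circle : public Shape
{
public:
    Circle() { m_radius = 1.0; }
    void Grow() { m_radius *= 2.0; }
private:
    double m_radius;
};

void Process(Shape* s)
{
    // Precondition established by every caller: s really points to a Circle.
    // static_cast performs the downcast without dynamic_cast's runtime check.
    Circle* c = static_cast<Circle*>(s);
    c->Grow();
}

int main()
{
    Circle circle;       // the dynamic type is known right here,
    Process(&circle);    // so the downcast inside Process() cannot be wrong
    return 0;
}

Even then, a common compromise is to keep the check in debug builds, e.g. assert(dynamic_cast<Circle*>(s) != nullptr); right before the static_cast, so the invariant is still verified where it costs nothing in release builds.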
Upvotes: 8
Reputation: 275956
What they are doing is invoking undefined behaviour.
In this case, the undefined behaviour is doing what they want. Within this particular source. On this particular compiler. With those particular compiler options.
This will make it very tempting to continue doing it. After all, it works.
The cost is that every time you update your compiler, you should check that it still works. Every time you modify a compiler flag, you should check that it still works. Every time you use it, you should check that it still works. And those costs multiply.
A new version of the compiler (or a new compiler, or a new flag on the existing compiler) can perfectly legitimately inline an entire branch of code, note that it deterministically contains undefined behaviour, conclude that the branch must therefore be unreachable, and optimize it away (sometimes including the test).
This is not a theoretical danger. It has happened with signed integer overflow, where compilers proved that certain overflow checks could only succeed by invoking undefined behaviour and eliminated those branches after the fact.
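To make that concrete, here is an illustrative snippet (not taken from any particular compiler report): an after-the-fact overflow check written with signed arithmetic is itself undefined behaviour on the very input it is meant to catch, so the optimizer is allowed to treat it as always false and drop the branch.

#include <iostream>
#include <limits>

// Intended to return true if x + 1 would wrap around. The check itself
// overflows a signed int when x == INT_MAX, which is undefined behaviour,
// so an optimizing compiler may legally fold this to "return false".
bool WillWrap(int x)
{
    return x + 1 < x;
}

int main()
{
    int big = std::numeric_limits<int>::max();
    if (WillWrap(big))
        std::cout << "overflow caught" << std::endl;   // branch may be optimized away
    else
        std::cout << "no overflow" << std::endl;
    return 0;
}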
"It works" is step 1. Is it worth it?
Upvotes: 3