Reputation: 1680
In the context of multilayered development, with a core library and client code maintained by completely unrelated development teams (many client dev teams), what is the most effective way to extend the interface of a Base class by adding a new parameter with a default value to one of its methods?
Conceptually, I need to replace (in the core library) this old code:
struct Base
{
    virtual void foo() {}
    virtual ~Base() {}
};
with this new code:
struct Base
{
    virtual void foo(bool b = true) {}
    virtual ~Base() {}
};
The problem is that this will silently break client code such as:
struct Derived : public Base
{
    void foo() {} // no longer overrides once Base::foo gains the parameter
};

int main()
{
    Derived d;
    Base &b = d;
    b.foo(); // silently calls Base::foo(true) instead of Derived::foo()
}
One solution would be to have both methods, for instance:
struct Base
{
    virtual void foo(bool b) {}
    virtual void foo() { foo(true); }
    virtual ~Base() {}
};
This adds an unnecessary method, which is not a sustainable approach to library maintenance (interface bloat, cost of maintenance, testing, documentation, etc.). Of course, the old method could be deprecated, but that would imply that new client code would always need to specify the boolean parameter.
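For illustration, the deprecation might look something like this, assuming a C++14 compiler that supports the [[deprecated]] attribute:
struct Base
{
    virtual void foo(bool b) {}

    // Old zero-argument overload kept for source compatibility,
    // but flagged so callers get a diagnostic.
    [[deprecated("call foo(bool) instead")]]
    virtual void foo() { foo(true); }

    virtual ~Base() {}
};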
Another solution could be to provide a new version of the Base class:
struct BaseV2 : public Base
{
    virtual void foo(bool b = true) { /* delegate impl. */ }
};
This adds an unnecessary class, but at least the deprecation can be handled conveniently for the client side.
What are the other options? What can be done to simplify the introduction of such trivial interface changes in the core library?
Upvotes: 3
Views: 212
Reputation: 2175
A few things might be of help here:
1) Use of the override keyword will give a compiler error whenever a method is now hiding, rather than overriding, a base class method. e.g.:
struct Derived : public Base
{
    void foo() override {} // Fails to compile once Base::foo's signature changes
};
Sadly, this is one that you have to rely on your users doing, rather than something you can enforce. If your users experience enough pain, they might go for it.
2) Separate your class's interface from its implementation - in this case the implementation is virtual, and ideally private (the non-virtual interface idiom). e.g.
struct Base {
    void foo() { fooImpl(); }
private:
    virtual void fooImpl() = 0; // Or provide a default implementation
};

struct Derived : public Base {
private:
    void fooImpl() override { /* ... */ }
};
This has the benefit that you can add the default argument to foo() without breaking anything, and then decide what to do about other users of your code base.
If you decide you absolutely need to pass the parameter to client implementers of fooImpl() without keeping a deprecated version around, then you can change its signature. With a pure virtual, the compiler will stop anybody instantiating classes where an override is no longer happening, so you don't get a silently broken build. Pros: no bad builds; cons: work for some of your users even if they don't care about the new functionality.
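A rough sketch of that combination, assuming the new hook keeps the fooImpl name but now takes the bool (one possible shape among several):
struct Base {
    void foo(bool b = true) { fooImpl(b); } // non-virtual entry point gains the default
private:
    // Signature change: derived classes that still define only fooImpl()
    // become abstract, so instantiating them fails loudly at compile time
    // instead of silently detaching from the interface.
    virtual void fooImpl(bool b) = 0;
};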
Alternatively, if you decide the behaviour of your class needs to be delegated to a different function as a result of the parameter, e.g. fooImpl2(...), then in Base::foo you can test whether the variable is the default and call fooImpl or fooImpl2 as needed. fooImpl2 needn't just take a redundant copy of the bool parameter, of course; your delegating code can call it with entirely different parameters, as long as your foo implementation can work out what to do from the old method signature plus your new parameter.
Going down the fooImpl2 route, you can choose to provide a default implementation (pro: everybody's code compiles and works without effort; con: you have to provide a sensible default implementation) or make this one pure virtual as well (pro: easier for you; con: everybody else's code breaks, even if they don't want to implement your new interface).
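A minimal sketch of that delegation, assuming fooImpl2 is the name of the new hook (shown here with a default implementation; it could equally be pure virtual):
struct Base {
    void foo(bool b = true)
    {
        if (b)
            fooImpl();  // default: old behaviour through the old hook
        else
            fooImpl2(); // new behaviour routed to the new hook
    }
private:
    virtual void fooImpl() = 0;
    virtual void fooImpl2() { /* sensible default so existing derived classes still build */ }
};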
Another benefit of this approach is that you now know all users of your interface come in through a method that you control, so authentication, logging, common behaviour, and pre- and post-delegation sanity checks can all be done in one place, rather than having everybody half-bake their own thing.
3) Perhaps consider mixins, depending on what your new default parameter is intended to achieve. On the pro side, this approach allows ultimate flexibility for you and your users in combining methods, creating new ones, and not having to write new code when nothing has changed. The con side is that the error messages will be inscrutable, and if there are people in the organisation who aren't too familiar with template programming, things could go bad.
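To make the shape concrete, a tiny mixin sketch with invented names, purely for illustration; the real design would depend on what the parameter controls:
#include <iostream>

// Behaviour is composed from small template pieces rather than by
// widening one base class interface.
template <typename Base>
struct WithLogging : Base
{
    void foo()
    {
        std::cout << "logging before foo\n";
        Base::foo();
    }
};

struct Core
{
    void foo() { std::cout << "core foo\n"; }
};

int main()
{
    WithLogging<Core> c;
    c.foo(); // prints "logging before foo" then "core foo"
}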
Upvotes: 3
Reputation: 106106
This adds an unnecessary method, which is not a sustainable approach to library maintenance (interface bloat, cost of maintenance, testing, documentation, etc.).
That's why software with dependent client code that can't practically be cleaned up as changes are made tends to go through cycles of adding minor cruft, then a cleanup / new version that breaks backwards compatibility.
When that's just totally unacceptable, some hideous alternatives get used - like functions taking containers that can later carry arbitrary runtime-decodable options.... If you're that desperate, sleep on it.
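For the record, a hedged sketch of that last-resort pattern (the key name here is invented): the method takes a generic option container and decodes whatever it recognises at runtime, so its signature never has to change again:
#include <map>
#include <string>

struct Base
{
    virtual void foo(const std::map<std::string, std::string> &options = {})
    {
        const auto it = options.find("b");
        const bool b = (it == options.end()) || it->second == "true";
        (void)b; // ... act on b and on whatever future options appear ...
    }
    virtual ~Base() {}
};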
Upvotes: 1