Reputation: 22123
For example, if I had a struct FooSpec:
struct FooSpec { /* lots of data */ };
Is it bad practice to provide both of the following as overloaded functions? Or should I just choose one and stick with it?
Foo *createFoo(const FooSpec *spec) { /* ... */ }
Foo *createFoo(const FooSpec &spec) { return createFoo(&spec); }
Could having both cause unexpected problems?
Upvotes: 19
Views: 2272
Reputation: 42838
There should be only one way to do a specific thing. This will make the code clearer for its users. Therefore, you should pick only one, depending on the semantics:
If a nullptr is a meaningful value to pass to createFoo, then declare the function to take a pointer.
If a nullptr should never be passed to createFoo, then declare the parameter a reference to make the compiler yell at people who try to pass in a nullptr.
Now the users of your code will go "Ah, I can pass a null to that function, cool!" or "Okay this function needs an actual object to work properly".
Stick to this convention throughout a project to make the interface of the code more coherent.
Or as WhozCraig put it,
Pick the appropriate model and tell the clients "this is your interface."
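To illustrate the second choice above, here is a minimal sketch (the Foo and FooSpec bodies are placeholders, not from the question): with a reference parameter, the "needs an actual object" contract is enforced at compile time.

```cpp
struct FooSpec { int size = 0; /* lots of data */ };
struct Foo { int size; };

// Reference parameter: callers must supply an actual object;
// there is no null state to check for inside the function.
Foo *createFoo(const FooSpec &spec) {
    return new Foo{spec.size};
}

// Usage:
//   FooSpec spec{42};
//   Foo *foo = createFoo(spec);   // fine: an actual object
//   createFoo(nullptr);           // compile error: cannot bind a
//                                 // reference to a null pointer
```

A caller can still do `createFoo(*ptr)` with a null `ptr`, but that is undefined behavior on the caller's side; the interface itself no longer advertises null as an option.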
Upvotes: 21
Reputation: 14730
A software codebase is not a static object; it evolves† with the needs of those who use it. If a confusing interface is "bad practice" (I would rather say a silly thing to do), a codebase may end up in such a situation as it evolves through refactoring and code improvement.
In this instance, it's not bad practice if you're in the process of migrating your coding conventions to use references instead of const pointers (or the opposite, though that doesn't make much sense to me). The bad practice here would probably be to make these evolutions and not document them, leaving other people unaware that one of these overloads is deprecated, or simply which one to use.
There's also the possibility of passing a nullptr (or a NULL pointer, for old-timers like me) with const pointers. I am not sure of the benefit of this, given that you can overload functions on their arguments, but again, in the context of a changing API it makes sense.
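One way to document such a migration in the code itself, sketched below with placeholder Foo and FooSpec definitions, is the standard `[[deprecated]]` attribute (C++14), so the compiler tells callers which overload is on its way out:

```cpp
struct FooSpec { int size = 0; /* lots of data */ };
struct Foo { int size; };

// New convention: take a reference.
Foo *createFoo(const FooSpec &spec) {
    return new Foo{spec.size};
}

// Old convention, kept temporarily for existing callers.
// The attribute makes every use emit a compiler warning.
[[deprecated("pass a reference instead; the pointer overload will be removed")]]
Foo *createFoo(const FooSpec *spec) {
    return spec ? createFoo(*spec) : nullptr;  // forward to the new overload
}
```

Callers of the pointer overload still compile, but each call site gets a warning pointing at the replacement, which is exactly the documentation the paragraph above asks for.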
† it's alive!!!
Upvotes: 3
Reputation: 275500
I can think of one marginally useful reason to do this.
The pointer version can check for null, handle that case, and otherwise delegate to the reference version.
The reference version then doesn't need to check for null at all.
This lets callers say "I know this is not null because I checked", saving a branch within the function.
However, this benefit (saving a branch) becomes negligible once the cost of the function passes a very low threshold.
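That delegation pattern looks roughly like this (a sketch with placeholder Foo and FooSpec definitions):

```cpp
struct FooSpec { int size = 0; /* lots of data */ };
struct Foo { int size; };

// Reference overload: the "not null" precondition is already met,
// so no branch is needed here.
Foo *createFoo(const FooSpec &spec) {
    return new Foo{spec.size};
}

// Pointer overload: handles the null case exactly once,
// then delegates to the reference overload.
Foo *createFoo(const FooSpec *spec) {
    if (spec == nullptr)
        return nullptr;  // or whatever "no spec" should mean
    return createFoo(*spec);
}
```

Callers who have already checked for null call the reference overload directly and skip the branch; everyone else goes through the pointer overload.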
Upvotes: 5