Reputation: 1554
Consider this simple code:
struct A;
struct B {
    B() {}
    B(A const&) {}
};
struct A {
    operator int() const { return 0; }
};
void func(B) {}
void func(char) {}
int main()
{
    func(A()); // ambiguous call oO
}
First of all, I'm not sure whether I understand everything correctly, so please correct me wherever I'm wrong.
My understanding was that void func(B)
should have been chosen, since the argument to func
is an A,
which is a class type, hence the kind of conversion required is a "user-defined conversion sequence".
Now, from the IBM C++ reference:
A user-defined conversion sequence consists of the following:
- A standard conversion sequence
- A user-defined conversion
- A second standard conversion sequence
Now there are two user-defined conversions present,
B::B(const A&)
and A::operator int() const,
so the two sequences are:
-> A()
-> B::B(const A&)
-> standard conversion (identity conversion)
and
-> A()
-> A::operator int() const
-> standard conversion (integral conversion)
Since an integral conversion is worse than an identity conversion, I thought void func(B)
would be called, but the call is still ambiguous.
So please help me: at which point am I wrong, and why is the call ambiguous? Thanks a lot :)
Upvotes: 9
Views: 625
Reputation: 33014
so the sequence are -> A() -> B::B(const A&) -> Standard conversion (identity conversion)
No! Excerpt from the standard (draft) [over.best.ics] (emphasis mine):
- If no conversions are required to match an argument to a parameter type, the implicit conversion sequence is the standard conversion sequence consisting of the identity conversion (13.3.3.1.1).
func(A())
is not identity, it's user-defined. Again from the standard, [conv]:
For class types, user-defined conversions are considered as well; see 12.3. In general, an implicit conversion sequence (13.3.3.1) consists of a standard conversion sequence followed by a user-defined conversion followed by another standard conversion sequence.
I think you have a misunderstanding about standard conversions. They have nothing to do with user-defined types/classes. Standard conversions are only for built-in types: lvalue-to-rvalue conversion, array-to-pointer conversion, function-to-pointer conversion, integral promotions, floating-point promotion, integral conversions, floating-point conversions, floating-integral conversions, pointer conversions, pointer-to-member conversions, boolean conversions and qualification conversions.
A -> int
is not any of these, but a user-defined conversion. The standard on user-defined conversions, [class.conv], i.e. 12.3:
Type conversions of class objects can be specified by constructors and by conversion functions. These conversions are called user-defined conversions and are used for implicit type conversions (Clause 4), for initialization (8.5), and for explicit type conversions (5.4, 5.2.9).
You have two user-defined conversion sequences of the same rank (see M.M's answer to know why), so the compiler wants you to disambiguate.
Upvotes: 3
Reputation: 141633
The two conversion sequences here, A -> B
and A -> int,
are both user-defined because they operate via functions which you defined.
The rule for ranking user-defined conversion sequences is found in 13.3.3.2 (N3797):
User-defined conversion sequence U1 is a better conversion sequence than another user-defined conversion sequence U2 if they contain the same user-defined conversion function or constructor or they initialize the same class in an aggregate initialization and in either case the second standard conversion sequence of U1 is better than the second standard conversion sequence of U2.
These two conversion sequences don't contain the same user-defined conversion function, and they don't initialize the same class in aggregate initialization (since one initializes an int).
So it is not true that one sequence ranks above the other, therefore this code is ambiguous.
Upvotes: 7