In C++ Primer, by Stanley B. Lippman, the section on "Implicit Conversions" says that, for:

int ival; unsigned int ui; float fval;
fval = ui - ival * 1.0;

ival is converted to double, then multiplied by 1.0. The result is converted to unsigned int, then subtracted from ui. The result is converted to float, then assigned to fval.
But I don't think so. I think that in fact ival is converted to double and then multiplied by 1.0; then ui, which is of type unsigned int, is converted to double (not the other way around), and the result of the multiplication is subtracted from the converted-to-double value of ui. Finally, this double result is converted to float and assigned to fval.
To check this:
ival = 5;
ui = 10;
fval = 7.22f;
dval = 3.14;
std::cout << typeid(ui - ival * 1.0).name() << std::endl; // double
std::cout << (ui - ival * 1.7) << std::endl; // 1.5: this proves that the unsigned int ui is converted to double, not the other way around, because C++ preserves precision; otherwise the fractional part would be truncated.
Upvotes: 6
Views: 328
Your assumption is correct and the book is wrong.
fval = ui - ival * 1.0;
can be rewritten as
fval = ui - (ival * 1.0);
so that gives us
float = unsigned - (int * double)
The (int * double) becomes a double because of the usual arithmetic conversions, giving us

float = unsigned - double

which again results in a double, and we assign that double to the float variable.
Upvotes: 8