Nicola

Reputation: 631

C#: Wrong result when converting expression with floats to int

Take the following code:

float a = 100.0f;
float b = 0.05f;

If I want to convert the result of dividing a by (b * 1000.0f) to an integer, I can do it with this code:

float c = a / (b * 1000.0f); // c = 2f
int res = (int)c; // res = 2

But I want to reduce the number of lines of code, so I would prefer to write this instead:

int res = (int)(a / (b * 1000.0f)); // Here we are --> res = 1 !

Why does the last version give res = 1 instead of res = 2?

Upvotes: 3

Views: 385

Answers (1)

Eric Postpischil

Reputation: 222526

The compiler uses extra precision when computing some expressions. In the C# language specification, clause 9.3.7 allows an implementation to use more precision in a floating-point expression than the result type:

Floating-point operations may be performed with higher precision than the result type of the operation.

Note that the value of .05f is 0.0500000007450580596923828125. When .05f * 1000.0f is computed with float precision, the result is 50, due to rounding. However, when it is computed with double or greater precision, the result is 50.0000007450580596923828125. Then dividing 100 by that with double precision produces 1.999999970197678056393897350062616169452667236328125. When this is converted to int, the result is 1.
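
We can reproduce this arithmetic by forcing double precision with explicit casts. This is only a sketch of the scenario described above; the compiler is not required to evaluate the original all-float expression this way, but these are the values involved:

using System;

float a = 100.0f;
float b = 0.05f;

// (double)b is exactly 0.0500000007450580596923828125, and the
// product with 1000.0 is exact in double precision.
double product = (double)b * 1000.0;   // 50.0000007450580596923828125
double quotient = (double)a / product; // prints roughly 1.9999999701976781
Console.WriteLine(quotient);
Console.WriteLine((int)quotient);      // 1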

In float c = a / (b * 1000.0f);, the result of the division is converted to float. Even if the division is computed with double precision and produces 1.999999970197678056393897350062616169452667236328125, this value becomes 2 when rounded to float, so c is set to 2.
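
To see that rounding step in isolation, here is a sketch using the exact quotient value given above; the nearest float to that value is exactly 2:

using System;

double q = 1.999999970197678056393897350062616169452667236328125;
Console.WriteLine((float)q);        // 2
Console.WriteLine((int)(float)q);   // 2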

In int res = (int)(a / (b * 1000.0f));, the result of the division is not converted to float. If the compiler computes it with double precision, the result is 1.999999970197678056393897350062616169452667236328125, and converting that directly to int produces 1.
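
The same clause of the specification notes that an explicit cast can be used to force a value of a floating-point type to the exact precision of its type, which suggests a one-line fix. A sketch:

using System;

float a = 100.0f;
float b = 0.05f;

// The explicit (float) cast forces the quotient to be rounded to
// float precision (giving 2f) before the conversion to int.
int res = (int)(float)(a / (b * 1000.0f)); // res = 2
Console.WriteLine(res);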

Upvotes: 4
