Reputation: 167
When I run this code:
double d = 0.0;
for (int i = 0; i < 90; i++)
{
d += .01;
d %= 1;
Console.WriteLine(d);
}
I would expect the output to be
0.01
0.02
0.03
...
0.9
It acts like that until it gets to what should be 0.81. This is the output I see:
...
0.8
0.810000000000001
0.820000000000001
...
0.900000000000001
So what's going on here?
Upvotes: 0
Views: 183
Reputation: 13450
Use decimal:
decimal d = 0.0m;
for (int i = 0; i < 90; i++)
{
d += .01m;
d %= 1;
Console.WriteLine(d);
}
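decimal is a base-10 floating-point type, so 0.01m is represented exactly and the repeated additions stay exact. A quick check of the difference (a sketch, not part of the original answer):

```csharp
using System;

class Program
{
    static void Main()
    {
        // Base-2 doubles cannot store 0.1, 0.2, or 0.3 exactly,
        // but base-10 decimals can:
        Console.WriteLine(0.1 + 0.2 == 0.3);    // False for double
        Console.WriteLine(0.1m + 0.2m == 0.3m); // True for decimal
    }
}
```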
Upvotes: 0
Reputation: 43046
0.01 is a decimal fraction: 1/100. Binary floating-point numbers can only represent fractions with a finite number of digits if the denominator is a power of two. One hundred, of course, is not a power of two. (Decimal fractions can be finite if the denominator's only prime factors are 2 and 5, which are of course the prime factors of 10.)
Because the binary representation of 0.01 is infinite, the finite number that your program adds repeatedly is not exactly equal to 0.01; over many iterations, the inaccuracy accumulates to the point where it shows up in the formatted output.
See http://en.wikipedia.org/wiki/Double_precision_floating-point_format for more information.
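A minimal sketch of the effect described above: the double closest to 0.01, added 81 times, does not land on the double closest to 0.81.

```csharp
using System;

class Program
{
    static void Main()
    {
        double d = 0.0;
        for (int i = 0; i < 81; i++)
        {
            d += 0.01; // each addition rounds to the nearest representable double
        }
        Console.WriteLine(d == 0.81); // False: the error has accumulated
        Console.WriteLine(d - 0.81);  // the tiny leftover difference
    }
}
```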
The best way to do this particular task is to compute each value directly from the integer loop counter instead of accumulating repeated additions. For example, you could change d += 0.01; into d = (double)(i + 1) / 100;.
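Applied to the loop from the question, that fix might look like this (a sketch): each value involves a single division, so rounding error never accumulates across iterations.

```csharp
using System;

class Program
{
    static void Main()
    {
        for (int i = 0; i < 90; i++)
        {
            // One division per step: at most one rounding, no accumulation.
            double d = (double)(i + 1) / 100;
            d %= 1; // kept from the original loop; a no-op for values below 1
            Console.WriteLine(d);
        }
    }
}
```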
Upvotes: 5
Reputation: 556
Try this:
double d = 0.0;
for (int i = 0; i < 90; i++)
{
d += .01;
d %= 1;
Console.WriteLine(Math.Round(d, 2));
}
I think you do not get the expected result because of accumulated floating-point rounding error; rounding to two decimal places before printing hides it.
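Note that Math.Round only tidies the printed value; the underlying double still carries the accumulated error. A quick check (a sketch):

```csharp
using System;

class Program
{
    static void Main()
    {
        double d = 0.0;
        for (int i = 0; i < 81; i++) d += 0.01;
        Console.WriteLine(d == 0.81);        // False: the sum itself is inexact
        Console.WriteLine(Math.Round(d, 2)); // prints 0.81: only the display is fixed
    }
}
```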
Upvotes: 1