Reputation: 5379
The printf() function in Visual Studio 2012 (Compiler Version 17.00.61030 for x86) uses rounding away from zero when I suppress decimal digits. I.e., printf("%.0f", 0.5)
prints 1
. But when it comes to actually printing decimal digits it seems that it uses some IEEE-like rounding, but to the nearest odd digit. I.e., both printf("%.1f", 0.05)
and printf("%.1f", 0.15)
print 0.1
. I find this strange and inconsistent.
Here is some code:
#include <stdio.h>

int main(int, char**) {
    printf("0.5 -> %.0f\n", 0.5);
    printf("1.5 -> %.0f\n", 1.5);
    printf("-0.5 -> %.0f\n", -0.5);
    printf("-1.5 -> %.0f\n", -1.5);
    printf("0.05 -> %.1f\n", 0.05);
    printf("0.15 -> %.1f\n", 0.15);
    printf("0.25 -> %.1f\n", 0.25);
    printf("-0.05 -> %.1f\n", -0.05);
    printf("-0.15 -> %.1f\n", -0.15);
    printf("-0.25 -> %.1f\n", -0.25);
    return 0;
}
And the matching output:
0.5 -> 1
1.5 -> 2
-0.5 -> -1
-1.5 -> -2
0.05 -> 0.1
0.15 -> 0.1
0.25 -> 0.3
-0.05 -> -0.1
-0.15 -> -0.1
-0.25 -> -0.3
Am I misunderstanding something? I would assume that rounding to whatever many decimal places always works the same way.
Upvotes: 0
Views: 77
Reputation: 234715
It's down to the fact that only very few decimal numbers can be represented exactly in binary floating point.
0.5 can be, as it's a dyadic rational (its denominator is a power of two). Ditto 0.25.
0.15 can't be, and the closest available double is slightly lower than 0.15. Hence the rounding down that you observe.
Upvotes: 3