In the code below,
#include <stdio.h>

float area(float var) { return (var * var); }

int main() {
    float side;
    printf("\nEnter the side of square : ");
    scanf("%f", &side);
    printf("The area is : %.1f", area(side));
    return 0;
}
If the input is 15.5 and the format specifier "%.1f" is used, the output is:
240.3
and when the input is 15.5 with the format specifier "%.2f", the output is:
240.25 (which is actually the correct value)
Upvotes: 0
Views: 201
Reputation: 385645
The spec mandates rounding. Of the f conversion specifier, the C17 spec says:
A double argument representing a floating-point number is converted to decimal notation in the style [-]ddd.ddd, where the number of digits after the decimal-point character is equal to the precision specification. If the precision is missing, it is taken as 6; if the precision is zero and the # flag is not specified, no decimal-point character appears. If a decimal-point character appears, at least one digit appears before it. The value is rounded to the appropriate number of digits.
Note the last sentence.
Now, there are many ways of rounding. I believe printf is directed to honour the rounding mode set by fesetround, and that's what happens on my system.
#include <fenv.h>
#include <stdio.h>

// gcc doesn't recognize or need this.
#pragma STDC FENV_ACCESS ON

int main( void ) {
    fesetround( FE_DOWNWARD );
    printf( "%.1f\n", 0.66 );   // 0.6
    fesetround( FE_TONEAREST );
    printf( "%.1f\n", 0.66 );   // 0.7
}
Demo on Compiler Explorer
My system defaults to rounding to nearest. When it needs to choose which of two candidates is nearer (e.g. for 0.5, which is exactly halfway), it rounds to the nearest even number.
#include <stdio.h>

int main( void ) {
    printf( "%.0f\n", 0.5 );   // 0
    printf( "%.0f\n", 1.5 );   // 2
    printf( "%.0f\n", 2.5 );   // 2
    printf( "%.0f\n", 3.5 );   // 4
    printf( "%.0f\n", 4.5 );   // 4
}
Demo on Compiler Explorer
(I used .0f and .5 since 5/10 can be represented exactly as a floating-point number.)
I don't know if the spec mandates defaulting to round to nearest or not, but that's the natural thing to do when one wants to reduce the number of decimal places in a number.
It would also produce a lot of weird results if it didn't do this. Would you expect printf( "%.1f\n", 0.3 ); to print 0.2? Well, it would if you rounded down instead of rounding to nearest.
Many numbers are periodic in binary, including 1/10, 2/10, 3/10, 4/10, 6/10, 7/10, 8/10 and 9/10. These can't be represented exactly using floating-point numbers. Ideally, the implementation uses the nearest representable number instead, and this number is sometimes a little higher, sometimes a little lower.
#include <stdio.h>

int main( void ) {
    printf( "%.100g\n", 0.1 );   // 0.1000000000000000055511151231257827021181583404541015625
    printf( "%.100g\n", 0.3 );   // 0.299999999999999988897769753748434595763683319091796875
}
Demo on Compiler Explorer
If printf were to truncate instead of round, printf( "%.1f\n", 0.3 ); would print 0.2.
#include <fenv.h>
#include <stdio.h>

// gcc doesn't recognize or need this.
#pragma STDC FENV_ACCESS ON

int main( void ) {
    fesetround( FE_DOWNWARD );
    printf( "%.1f\n", 0.3 );    // 0.2
    fesetround( FE_TONEAREST );
    printf( "%.1f\n", 0.3 );    // 0.3
}
Demo on Compiler Explorer
Finally, I don't find anything in the spec about how to break ties when rounding to nearest. The decision to round to even appears to be the implementation's.
This tie-breaking rule has no positive/negative bias and no bias toward/away from zero, making it a natural choice. It's even "the default rounding mode used in IEEE 754 operations for results in binary floating-point formats."
Upvotes: 2