Reputation: 15511
I want to understand why zero padding works differently for %g and %f formats in printf. Here's a sample program:
#include <stdio.h>

int main()
{
    double d = 0.10000000000001;
    printf("%04.2f\n", d);
    printf("%04.2g\n", d);
    return 0;
}
It outputs
0.10
00.1
This is reproducible in both VC++ on Windows and gcc on Linux.
Why does it work like this? Is this correct behavior?
UPDATE: I found the answer, but I think I'll leave the question open...
Upvotes: 3
Views: 9069
Reputation: 254531
This is correct. According to the specification of style g
in C99 7.19.6.1/8:
unless the # flag is used, any trailing zeros are removed from the fractional portion of the result and the decimal-point character is removed if there is no fractional portion remaining
while the specification of style f says nothing about removing trailing zeros.
So %.2g turns the value into "0.1" (three characters), and the 0 flag then pads it on the left to the field width of 4, giving "00.1"; %.2f keeps the trailing zero and already fills the width.
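You can see the rule in action by adding the # flag mentioned in the quote, which suppresses the removal of trailing zeros (a minimal example built the same way as the program in the question):

#include <stdio.h>

int main()
{
    double d = 0.10000000000001;

    printf("%04.2f\n", d);   /* 0.10 - %f keeps the trailing zero            */
    printf("%04.2g\n", d);   /* 00.1 - %g strips it, so the 0 flag pads left */
    printf("%#04.2g\n", d);  /* 0.10 - # keeps the trailing zero for %g too  */
    return 0;
}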
Upvotes: 3
Reputation: 6228
With %g the precision counts significant digits rather than digits after the decimal point, and printf chooses the more compact of the %e-style and %f-style representations. You can see how the rounding works by comparing d = 0.10500000, which prints as "00.1", with d = 0.10500001, which prints as "0.11".
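For example, these two lines reproduce that behaviour (the difference comes from 0.105 having no exact double representation; the stored value is slightly below 0.105, so two significant digits round down to 0.10):

#include <stdio.h>

int main()
{
    /* stored as ~0.104999999999999996, rounds to 0.10, %g drops the zero,
       and the 0 flag pads to the field width of 4 */
    printf("%04.2g\n", 0.10500000);   /* 00.1 */

    /* stored just above 0.105, rounds up to 0.11, already 4 characters wide */
    printf("%04.2g\n", 0.10500001);   /* 0.11 */
    return 0;
}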
Upvotes: 1
Reputation: 548
In C++ you can control the width and the precision with the member functions of the same name on std::cout.
From the documentation at http://en.cppreference.com/w/cpp/io/basic_ostream:
precision - manages the decimal precision of floating-point output (public member function of std::ios_base)
width - manages the field width (public member function of std::ios_base)
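A stream version of the two printf calls from the question might look like this (a sketch using the <iomanip> manipulators, which forward to those member functions; std::fixed corresponds to %f, and the default float format behaves like %g, so it also drops trailing zeros):

#include <iostream>
#include <iomanip>

int main()
{
    double d = 0.10000000000001;

    // fixed + setprecision(2) mirrors %.2f: two digits after the decimal point
    std::cout << std::setfill('0') << std::setw(4)
              << std::fixed << std::setprecision(2) << d << '\n';         // 0.10

    // the default float format mirrors %.2g: the trailing zero is dropped,
    // so the fill character pads the field on the left (setw resets after
    // each insertion and has to be set again)
    std::cout << std::setw(4)
              << std::defaultfloat << std::setprecision(2) << d << '\n';  // 00.1

    return 0;
}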
Upvotes: 0