Reputation: 38795
I recently stumbled over code like this:
uint32_t val;
...
printf("%.08X", val);
which is confusing to me. I mean, either specify 0+width or specify precision; what's the point of both?
The width argument ... controls the minimum number of characters that are output. If the number of characters in the output value is less than the specified width, blanks are added ... . If width is prefixed by 0, leading zeros are added ... .
The width specification never causes a value to be truncated. ...
... It consists of a period (.) followed by a non-negative decimal integer that, depending on the conversion type, specifies ... the number of significant digits to be output.
The type determines either the interpretation of precision or the default precision when precision is omitted ...
d, i, u, o, x, X
- The precision specifies the minimum number of digits to be printed. If the number of digits in the argument is less than precision, the output value is padded on the left with zeros. The value is not truncated when the number of digits exceeds precision.
So I'd either use "%08X" or "%.8X", but "%.08X" doesn't make any sense to me.
It does, however, seem that it makes no difference: all three variants appear to produce the same output.
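For example, a quick test along these lines (the value is arbitrary) prints the same thing three times for me:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint32_t val = 0xBEEF;   /* arbitrary test value; assumes uint32_t is unsigned int here, as in the snippet above */

    printf("%08X\n",  val);  /* width 8 with 0 flag -> 0000BEEF */
    printf("%.8X\n",  val);  /* precision 8         -> 0000BEEF */
    printf("%.08X\n", val);  /* precision 08 (== 8) -> 0000BEEF */
    return 0;
}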
Upvotes: 3
Views: 2320
Reputation: 10726
You're correct: "%08X", "%.8X" and "%.08X" are equivalent.
As for why - refer to this:
http://www.cplusplus.com/reference/cstdio/printf/
In case 1, this one specifies the width:
Minimum number of characters to be printed. If the value to be printed is shorter than this number, the result is padded with blank spaces. The value is not truncated even if the result is larger.
Hence: %08X will print a minimum of 8 characters.
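For instance (just a sketch; the value 0xBEEF is arbitrary), the width behaves like this with and without the 0 prefix:

#include <stdio.h>

int main(void)
{
    unsigned int val = 0xBEEF;   /* arbitrary test value */

    printf("[%8X]\n", val);      /* width 8, padded with blanks:                  [    BEEF] */
    printf("[%08X]\n", val);     /* width 8 with the 0 prefix, padded with zeros: [0000BEEF] */
    return 0;
}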
And from this reference:
For integer specifiers (d, i, o, u, x, X): precision specifies the minimum number of digits to be written. If the value to be written is shorter than this number, the result is padded with leading zeros. The value is not truncated even if the result is longer. A precision of 0 means that no character is written for the value 0.
For a, A, e, E, f and F specifiers: this is the number of digits to be printed after the decimal point (by default, this is 6).
For g and G specifiers: this is the maximum number of significant digits to be printed.
For s: this is the maximum number of characters to be printed. By default all characters are printed until the ending null character is encountered.
If the period is specified without an explicit value for precision, 0 is assumed.
%.8X uses the precision specifier. As such, it too will print a minimum of 8 characters.
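The quoted precision rules can be seen in a small sketch (the values here are arbitrary):

#include <stdio.h>

int main(void)
{
    printf("[%.8X]\n", 0xBEEFu);      /* padded to a minimum of 8 digits:               [0000BEEF] */
    printf("[%.4X]\n", 0xDEADBEEFu);  /* more digits than the precision, not truncated: [DEADBEEF] */
    printf("[%.0X]\n", 0u);           /* precision 0 with value 0: nothing is printed:  []         */
    return 0;
}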
And lastly: %.08X will also print a minimum of 8 characters (again, because of the precision specifier). Why? Because 08 is interpreted as 8, resulting in the same output as the previous two. This may not seem to make sense for single-digit precision specifications, but in a case like %0.15X it can matter.
These different formats exist to allow finer control of the output (which, in my opinion, is a carry-over that resembles Fortran a lot). However, as you've discovered, this overlap in the controls allows you to get the same output with different format specifications.
UPDATE:
As hvd pointed out (and I had forgotten to mention): the X specifier takes an unsigned value, so in this case your output is the same for %08X and %.8X, since there is no sign to account for. However, for something like %08d versus %.8d it isn't: the former pads to 8 characters (the sign counts towards the width), while the latter pads to 8 digits (the sign is extra), so they behave differently for negative values.
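A quick sketch of that difference (the value -123 is arbitrary):

#include <stdio.h>

int main(void)
{
    printf("[%08d]\n", -123);   /* width 8: the sign counts, zero-padded to 8 characters: [-0000123]  */
    printf("[%.8d]\n", -123);   /* precision 8: padded to 8 digits, the sign is extra:    [-00000123] */
    return 0;
}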
Upvotes: 5