Reputation: 187
#include <stdio.h>

int main()
{
    printf("%*.*d\n", -6, 7, 20000);
    printf("%*.*d\n", 5, -6, 2000);
    return 0;
}
Output:
0020000
 2000
I don't understand how printf interprets the *.* in the format specifier. In the first call to printf(), does the later 7 override the earlier -6, so that the output width becomes 7?
Upvotes: 6
Views: 2132
Reputation:
This is the source code of printf in glibc:
/* VARARGS1 */
int __printf (const char *format, ...)
{
  va_list arg;
  int done;

  va_start (arg, format);
  done = vfprintf (stdout, format, arg);
  va_end (arg);

  return done;
}
As you can see, printf collects its arguments through a va_list and hands them to vfprintf. Here's another example to show how a variadic function works:
/* va_start example */
#include <stdio.h>     /* printf */
#include <stdarg.h>    /* va_list, va_start, va_arg, va_end */

/* Prints n doubles passed as variadic arguments. */
void PrintFloats (int n, ...)
{
  int i;
  double val;
  va_list vl;

  printf ("Printing floats:");
  va_start (vl, n);
  for (i = 0; i < n; i++)
  {
    val = va_arg (vl, double);
    printf (" [%.2f]", val);
  }
  va_end (vl);
  printf ("\n");
}

int main ()
{
  PrintFloats (3, 3.14159, 2.71828, 1.41421);
  return 0;
}
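Following the same pattern as the glibc source above, you can build your own printf-style function by forwarding to vfprintf. A minimal sketch (log_message is a hypothetical name, not a standard function):

#include <stdio.h>
#include <stdarg.h>

/* Collect the variadic arguments into a va_list and forward them
   to vfprintf, just as glibc's __printf does with stdout. */
int log_message (const char *format, ...)
{
  va_list arg;
  int done;

  va_start (arg, format);
  fputs ("log: ", stderr);
  done = vfprintf (stderr, format, arg);
  va_end (arg);
  return done;
}

int main ()
{
  log_message ("%d items processed\n", 42);
  return 0;
}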
Upvotes: 0
Reputation: 215277
The argument to the * before the . is the field width, and the argument to the * after the . is the precision.
Field width is the minimum number of bytes that will be output as a result of the conversion; the output will be padded (by default, on the left with spaces, but left zero padding and right space padding are also options, controlled by flags) if fewer bytes would be produced. A negative argument to the * for width is interpreted as the corresponding positive value with the - flag, which moves the padding to the right (i.e. left-justifies the field).
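For instance, a quick sketch of that equivalence (expected output in the comments, assuming a standard C library):

#include <stdio.h>

int main ()
{
  /* A negative * width behaves like the corresponding positive
     width with the '-' flag: both left-justify in 6 columns. */
  printf ("[%*d]\n", -6, 42);  /* prints [42    ] */
  printf ("[%-6d]\n", 42);     /* prints [42    ] */
  return 0;
}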
Precision, on the other hand, has a meaning that varies according to the conversion being performed. Negative precisions are treated as if no precision had been specified at all. For integers, it's the minimum number of digits (not total output) to be produced; if fewer digits would be produced, zeros are added to the left. An explicit precision of 0 results in no digits being produced when the value is 0 (instead of a single 0 being produced). For strings, precision limits the number of output bytes, truncating the string (and permitting a longer, non-null-terminated input array) if necessary. For floating point specifiers, precision controls the number of places printed, either after the radix point (for %f) or the total places of significance (for the other formats).
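A short sketch illustrating each of these cases (expected output in the comments):

#include <stdio.h>

int main ()
{
  printf ("[%.7d]\n", 20000);     /* [0020000]: zero-extended to 7 digits */
  printf ("[%.0d]\n", 0);         /* []: precision 0 with value 0 prints no digits */
  printf ("[%.3s]\n", "abcdef");  /* [abc]: output limited to 3 bytes */
  printf ("[%.2f]\n", 3.14159);   /* [3.14]: 2 places after the radix point */
  printf ("[%.3g]\n", 3.14159);   /* [3.14]: 3 significant digits */
  return 0;
}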
In your examples:

printf("%*.*d\n", -6, 7, 20000);

Here the field is left-aligned (padding on the right) with a minimum width of 6, but the field ends up being wider anyway, so the width is ignored. The precision of 7 forces integer output to be at least 7 digits, so you end up with 0020000 as the converted field contents, which already exceeds the width.
In the other one:

printf("%*.*d\n", 5, -6, 2000);

The field width is 5, with the default right alignment; padding is with spaces on the left. The negative precision is ignored, as if it were not specified, so the converted field contents are 2000, only 4 bytes, which get padded up to 5 bytes to fill the width by means of a single leading space.
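You can make the padding visible by bracketing the field; a small sketch of both calls (expected output in the comments):

#include <stdio.h>

int main ()
{
  printf ("[%*.*d]\n", -6, 7, 20000);  /* [0020000]: 7 digits already exceed width 6 */
  printf ("[%*.*d]\n", 5, -6, 2000);   /* [ 2000]: precision ignored, space-padded to width 5 */
  return 0;
}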
Upvotes: 8
Reputation: 9899
printf("%*.*d\n", -6 , 7,20000);
is same as
printf("%-6.7d\n, 20000);
This just provide a way for dynamic format.
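For example, a sketch of a case where the width is only known at run time (the item names here are made up for illustration):

#include <stdio.h>
#include <string.h>

int main ()
{
  const char *items[] = { "apples", "pears", "strawberries" };
  int counts[] = { 3, 12, 7 };
  int i, len, width = 0;

  /* Compute the column width from the data... */
  for (i = 0; i < 3; i++)
  {
    len = (int) strlen (items[i]);
    if (len > width)
      width = len;
  }
  /* ...then pass it to printf through the * specifier. */
  for (i = 0; i < 3; i++)
    printf ("%-*s %3d\n", width, items[i], counts[i]);
  return 0;
}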
Upvotes: 3