Reputation: 14418
I know this is a very basic question, but I still have a couple of points of confusion.
char *p = malloc(100);
sprintf(p, ".. %03.1f .. \n", 2.5);
Result: 2.5
char *p = malloc(100);
sprintf(p, ".. %05.1f .. \n", 2.5);
Result: 002.5
So, is my understanding correct if I say:
%05.1
-> represents a total of 5 placeholders, wherein the . is also counted?
Upvotes: 0
Views: 644
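For reference, a minimal self-contained version of the two snippets above, with the headers they need and the buffer actually printed, might look like this (the buffer name p is kept from the question):

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    char *p = malloc(100);
    if (p == NULL)
        return 1;

    /* Width 3: "2.5" already fills 3 positions, so no padding is added. */
    sprintf(p, ".. %03.1f .. \n", 2.5);
    fputs(p, stdout);   /* .. 2.5 ..  */

    /* Width 5: "2.5" is padded with two leading zeros to reach 5 positions. */
    sprintf(p, ".. %05.1f .. \n", 2.5);
    fputs(p, stdout);   /* .. 002.5 ..  */

    free(p);
    return 0;
}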
Reputation: 754590
In the format string %05.1f, the total field width will be (at least) 5, there will be one digit after the decimal point, and the field will be zero-padded on the left if it would be shorter than 5 positions without the padding. Note that if the number needs more than 5 positions, it will use them (try printing 1.2E37, for example). For full details, read a specification such as the POSIX specification for printf().
For example:
#include <stdio.h>

int main(void)
{
    double values[] =
    {
        0, 0.1, -1.0, 3.1415, -99.9, -123.4, -2345.6,
        88.8, 777.77, 9876.54, -1000000.2, 222333444.555,
    };
    enum { NUM_VALUES = sizeof(values) / sizeof(values[0]) };

    for (int i = 0; i < NUM_VALUES; i++)
        printf("%16.5f = %05.1f\n", values[i], values[i]);
    return 0;
}
Example output:
         0.00000 = 000.0
         0.10000 = 000.1
        -1.00000 = -01.0
         3.14150 = 003.1
       -99.90000 = -99.9
      -123.40000 = -123.4
     -2345.60000 = -2345.6
        88.80000 = 088.8
       777.77000 = 777.8
      9876.54000 = 9876.5
  -1000000.20000 = -1000000.2
 222333444.55500 = 222333444.6
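And, as a quick sketch of the point about wide values, the width of 5 is only a minimum; a value such as 1.2E37 simply takes as many positions as it needs (the exact digits depend on the nearest double to 1.2E37 on your platform):

#include <stdio.h>

int main(void)
{
    /* The field width is a minimum, not a maximum: 1.2E37 needs roughly 40
       characters, so the %05.1f conversion uses all of them. */
    printf("[%05.1f]\n", 1.2E37);
    return 0;
}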
Upvotes: 0
Reputation: 392
No! In %0x.yf, x says there are at most x numbers to the left of the decimal point, and y says there are at most y numbers to the right of it; it doesn't count the . itself, though!
.number
For integer specifiers (d, i, o, u, x, X): the precision specifies the minimum number of digits to be written. If the value to be written is shorter than this number, the result is padded with leading zeros. The value is not truncated even if the result is longer. A precision of 0 means that no character is written for the value 0.
For e, E and f specifiers: this is the number of digits to be printed after the decimal point.
For g and G specifiers: this is the maximum number of significant digits to be printed.
For s: this is the maximum number of characters to be printed. By default all characters are printed until the terminating null character is encountered.
For the c type: it has no effect.
When no precision is specified, the default is 1. If the period is specified without an explicit value for precision, 0 is assumed.
See this: https://ideone.com/RAMYdj
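To make those rules concrete, here is a small sketch of my own (separate from the linked paste) showing what the precision does for a few different specifiers:

#include <stdio.h>

int main(void)
{
    printf("[%.5d]\n", 42);        /* integer: minimum number of digits -> [00042] */
    printf("[%.2f]\n", 3.14159);   /* f: digits after the decimal point -> [3.14]  */
    printf("[%.3g]\n", 3.14159);   /* g: significant digits             -> [3.14]  */
    printf("[%.3s]\n", "abcdef");  /* s: at most this many characters   -> [abc]   */
    printf("[%.0d]\n", 0);         /* precision 0 with the value 0      -> []      */
    return 0;
}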
Upvotes: 0
Reputation: 2776
The fprintf man page says: 'The field width. An optional decimal digit string (with nonzero first digit) specifying a minimum field width. ...' Since the '.' occupies a place in the field, it is counted toward the width.
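A tiny sketch to confirm it: with a minimum width of 5, the digits, the '.', and the padding together fill the five positions:

#include <stdio.h>

int main(void)
{
    /* "2.5" is 3 characters, so the width-5 field adds 2 characters of
       padding; the '.' counts as one of the 5 positions. */
    printf("[%5.1f]\n", 2.5);   /* prints [  2.5] */
    printf("[%05.1f]\n", 2.5);  /* prints [002.5] */
    return 0;
}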
Upvotes: 1