Reputation: 9
I have a question while studying the C language.
printf("%d, %f \n", 120, 125.12);
The output value in this code is:
120, 125.120000
And
printf("%d, %f \n", 120.1, 125.12);
The output value of this code is:
1717986918, 0.000000
Of course, I understand that %d printed a strange value because I mistakenly passed a floating-point number to an integer format specifier. But I don't understand why the %f after it prints 0.000000. Does an earlier specifier affect the value printed by the ones that follow?
I'm asking because the same thing happens no matter what values I pass in.
Upvotes: 0
Views: 92
Reputation: 140880
why %f is 0.000000 after that.
According to the C standard, because %d expects an int and you passed a double, the behavior is undefined and anything is allowed to happen.
Anyway: I used http://www.binaryconvert.com/convert_double.html to convert doubles to bits:
120.1 === 0x405E066666666666
125.12 === 0x405F47AE147AE148
I guess that your platform is x86-ish, little endian, with ILP32 data model.
The %d printf format specifier is for int, and sizeof(int) is 4 bytes, i.e. 32 bits, in ILP32. Because the architecture is little endian, and so is the stack, the first 4 bytes read from the stack are 0x66666666, and its decimal value, 1717986918, is what gets printed.
Left on the stack are three 32-bit little-endian words; let's split them: 0x405E0666 0x147AE148 0x405F47AE.
The %f printf format specifier is for double, and sizeof(double) on your platform is 8 bytes, i.e. 64 bits. We already read 4 bytes from the stack, and now we read the next 8. Accounting for the endianness, we read 0x147AE148405E0666, which is interpreted as a double. Again I used that site to convert the hex to a double; the result is 5.11013578783948654276686949883E-210. E-210 means this is a very, very small number. Because %f by default prints 6 digits after the decimal point, and the leading digits of this number are all 0, only zeros are printed. If you used the %g printf format specifier instead, the number would be printed as 5.11014e-210.
Upvotes: 2
Reputation:
This can be attributed to the fact that the printf function is not type-safe, i.e. there is no connection between the specifiers in the format string and the passed arguments.
The default argument promotions convert integer arguments narrower than int to int (typically 4 bytes) and float arguments to double (8 bytes). As the types are guessed from the format string, a shift in the data occurs. It is likely that the 0.000000 appears because what is loaded corresponds to a floating-point zero (biased exponent = 0).
Try with 1.0, 1.e34.
Upvotes: 0
Reputation: 24052
printf is a variadic function, meaning its argument types and count are variable. As such, the arguments must be extracted according to their types. Extraction of a given argument depends on the proper extraction of the arguments that precede it. Different argument types have different sizes, alignments, etc.
If you use an incorrect format specifier for a printf argument, it throws printf out of sync. The arguments after the one that was incorrectly extracted will not, in general, be extracted correctly.
In your example, it probably extracted only a portion of the first double argument that was passed, since it was expecting an int argument. After that it was out of sync and improperly extracted the second double argument. This is speculative, though, and varies from one architecture to another; the behavior is undefined.
Upvotes: 2