Reputation: 24788
$ gcc --version
Configured with: --prefix=/Library/Developer/CommandLineTools/usr --with-gxx-include-dir=/Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/c++/4.2.1
Apple clang version 12.0.5 (clang-1205.0.22.9)
Target: x86_64-apple-darwin20.3.0
Thread model: posix
InstalledDir: /Library/Developer/CommandLineTools/usr/bin
$ uname -a
Darwin MacBook-Air.local 20.3.0 Darwin Kernel Version 20.3.0: Thu Jan 21 00:07:06 PST 2021; root:xnu-7195.81.3~1/RELEASE_X86_64 x86_64
Code:
#include <stdio.h>

int main(void) {
    int i = 2;
    printf("int \"2\" as %%.128f: %.128f\n", i);
    printf("int \"2\" as %%.128lf: %.128lf\n", i);
    printf("int \"2\" as %%.128LF: %.128Lf\n", i);
    return 0;
}
Compile:
$ gcc floatingpointtypes.c
Execute:
$ ./a.out
int "2" as %.128f: 0.00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000
int "2" as %.128lf: 0.00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000
int "2" as %.128LF: 0.00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000
When the bit pattern of the integer 2 is interpreted as an IEEE-754 single-precision (32-bit) or double-precision (64-bit) floating-point number, it is a denormalized (subnormal) value, because the exponent bits are all 0s. The resulting value is 2 * 2^-149 = 2^-148 (about 2.8e-45) in single precision, or 2 * 2^-1074 = 2^-1073 (about 9.9e-324) in double precision.
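For reference, those expected values can be computed with the standard ldexpf/ldexp functions (a quick check, assuming IEEE-754 single- and double-precision formats):

#include <stdio.h>
#include <math.h>

int main(void) {
    printf("%g\n", ldexpf(2.0f, -149)); /* 2 * 2^-149 = 2^-148, ~2.8026e-45 */
    printf("%g\n", ldexp(2.0, -1074));  /* 2 * 2^-1074 = 2^-1073, ~9.88131e-324 */
    return 0;
}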
Question:
Why does my code print 0s? Is it because C can't interpret denormalized floating-point numbers to their correct value?
Upvotes: 1
Views: 474
Reputation: 126546
If you want to convert the 'bits' of an int to a floating point type, the easiest way to do that is with a union:
#include <stdio.h>
#include <stdint.h>

union u32 {
    int32_t i;
    float f;
};

union u64 {
    int64_t i;
    double d;
};

int main(void) {
    union u32 a;
    union u64 b;

    a.i = 2;
    b.i = 2;
    printf("%g\n", a.f);
    printf("%g\n", b.d);
    return 0;
}
resulting output:
2.8026e-45
9.88131e-324
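If you'd rather avoid the union, a memcpy performs the same reinterpretation and is also well-defined (a minimal sketch, same assumptions as above):

#include <stdio.h>
#include <stdint.h>
#include <string.h>

int main(void) {
    int32_t i32 = 2;
    int64_t i64 = 2;
    float f;
    double d;

    memcpy(&f, &i32, sizeof f); /* copy the 32-bit pattern into a float */
    memcpy(&d, &i64, sizeof d); /* copy the 64-bit pattern into a double */
    printf("%g\n", f);
    printf("%g\n", d);
    return 0;
}

This prints the same two subnormal values as the union version.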
What is likely happening with your code is that you're on a system that passes arguments in different registers depending on their type: integer registers for int values and floating-point registers for floating-point types. So the call puts 2 into an integer register, but the value printed by %f is whatever happens to be in the first floating-point argument register -- probably 0, as no floating-point code has run before the call.
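A minimal sketch of that effect (assuming an x86-64 System V calling convention; since the behavior is undefined, the result may differ on your machine):

#include <stdio.h>

int main(void) {
    printf("%f\n", 3.5); /* leaves 3.5 in the first floating-point argument register */
    printf("%f\n", 2);   /* undefined behavior: 2 goes into an integer register,
                            while %f reads a floating-point register, which may
                            still hold 3.5 -- or anything else */
    return 0;
}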
Upvotes: 6
Reputation: 444
In addition to the undefined behavior, you assumed the wrong width: the %f specifier expects a double-precision (64-bit) floating-point number, not a 32-bit float. The 64-bit pattern of integer 2 reinterpreted as a double is exactly 2^-1073, whose first non-zero digit appears 324 decimal places after the decimal point, but you were only printing the first 128 digits. Try:
#include <stdio.h>

int main(void)
{
    long long i = 2;
    printf("long long \"2\" as %%.1073f: %.1073f\n", *(double *) &i);
    return 0;
}
Output:
long long "2" as %.1073: 0.00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000988131291682493088353137585736442744730119605228649528851171365001351014540417503730599672723271984759593129390891435461853313420711879592797549592021563756252601426380622809055691634335697964207377437272113997461446100012774818307129968774624946794546339230280063430770796148252477131182342053317113373536374079120621249863890543182984910658610913088802254960259419999083863978818160833126649049514295738029453560318710477223100269607052986944038758053621421498340666445368950667144166486387218476578691673612021202301233961950615668455463665849580996504946155275185449574931216955640746893939906729403594535543517025132110239826300978220290207572547633450191167477946719798732961988232841140527418055848553508913045817507736501283943653106689453125
Upvotes: 2