Reputation: 889
I'm new to the C programming language, and I'd like to ask a question.
The integer i here is converted to float, and f (somehow) successfully represents 5.0:
int i = 5;
float f = i; //Something happened here...
However, if we try this approach:
int i = 5;
float f = *(float *)&i;
f would NOT get 5.0, since this interprets the bits stored in i in "float's way", i.e. as a float's bit pattern. So what magic does the compiler actually do in the first case? It seems quite an effort-taking job... Can someone explain? Thanks.
Upvotes: 12
Views: 29337
Reputation: 6050
Well, I just compiled the code in question under VC++ and looked at the disassembly:
int i = 5;
00A613BE mov dword ptr [i],5
float f = i;
00A613C5 fild dword ptr [i]
00A613C8 fstp dword ptr [f]
The first instruction moves 5 into the memory allocated for i. The second, fild, loads the integer from i, converts it to floating point, and pushes it onto the FPU stack. The third, fstp, pops the value off the FPU stack and stores it to f as a 32-bit float.
Upvotes: 1
Reputation: 96
This converts the raw IEEE-754 bit pattern of an int to the float value it encodes (normal numbers only; zero, denormals, infinities and NaNs are not handled):
float IntBitsToFloat(long long int bits)
{
    int sign = ((bits & 0x80000000) == 0) ? 1 : -1;
    int exponent = ((bits & 0x7f800000) >> 23);
    int mantissa = (bits & 0x007fffff);
    mantissa |= 0x00800000; // restore the implicit leading 1
    // Calculate the result: sign * mantissa * 2^(exponent - 127 - 23)
    float f = (float)(sign * mantissa * Power(2, exponent-150));
    return f;
}
Here Power(2, n) stands for 2^n; in standard C you can use ldexp from <math.h>.
Upvotes: 3
Reputation: 39109
The magic depends on your platform.
One possibility is that your CPU has a special instruction that converts integers into floating point values in its registers.
Of course someone has to design these CPUs, so this is not really an explanation for the algorithm at hand.
A platform might be using a floating point format that goes like this (actually, this is a fixed-point format for the sake of example):
[sIIIIFFFF]
where s is the sign, the Is are the digits before the dot and the Fs the digits after it, e.g. (the dot is virtual and shown only for presentation):
-47.5000
[sIIII.FFFF]
In this case conversion is almost trivial and can be implemented using bitshifting:
-47.5000
>> 4
---------------
-47
And like in this example, commodity C++ implementations use a floating point representation often referred to as IEEE Floating Point, see also IEEE 754-1985. These are more complicated than fixed-point numbers, as they really encode a simple formula of the form (-1)^s * m * 2^e; however, they have a well defined interpretation and you can unfold them into something more suitable.
Upvotes: 1
Reputation: 71120
On the IA32 systems, the compiler would generate the following:-
fild dword ptr [i] ; load integer in FPU register, I believe all 32bit integers can be represented exactly in an FPU register
fstp dword ptr [f] ; store fpu register to RAM, truncating/rounding to 32 bits, so the value may not be the same as i
Upvotes: 2
Reputation: 24423
If you look at the assembly
int i = 5;
000D139E mov dword ptr [i],5
float f = i;
000D13A5 fild dword ptr [i]
000D13A8 fstp dword ptr [f]
fild (floating-point load integer) is what does the magic: it loads the integer and converts it to floating point in one instruction.
Upvotes: 8
Reputation: 279395
It is an effort-taking job, but any CPU that has floating-point support will provide an instruction that does it.
If you had to convert a 2's complement int to IEEE float format yourself, you would:
- Copy the top n bits of the int (starting from the bit after the first set non-sign bit) into the significand of the float, where n is however many bits of significand there are in a float (23 for a 32 bit single-precision float). If there are any remaining bits in the int (that is, if it's greater than 2^24), and the next bit after the ones you have room for is 1, you may or may not round up depending on the IEEE rounding mode in operation.
- Set the exponent of the float according to the position of that first set bit.
- Copy the sign of the int to the float.
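A sketch of those steps in C, assembling the bit pattern by hand (the function name is mine; it truncates low bits rather than honouring the IEEE rounding mode):

```c
#include <stdint.h>
#include <string.h>

/* Hand-rolled int -> float conversion following the steps above. */
float int_to_float(int32_t i)
{
    uint32_t sign = (i < 0) ? 1u : 0u;
    uint32_t mag  = sign ? (uint32_t)(-(int64_t)i) : (uint32_t)i;
    if (mag == 0)
        return 0.0f;

    int msb = 31;
    while (!(mag & (1u << msb)))
        msb--;                                /* position of the highest set bit */

    uint32_t exponent = (uint32_t)(msb + 127);  /* biased exponent */
    uint32_t significand;
    if (msb <= 23)
        significand = (mag << (23 - msb)) & 0x007fffffu; /* shift up, drop implicit 1 */
    else
        significand = (mag >> (msb - 23)) & 0x007fffffu; /* truncate low bits */

    uint32_t bits = (sign << 31) | (exponent << 23) | significand;
    float f;
    memcpy(&f, &bits, sizeof f);   /* reinterpret the assembled bit pattern */
    return f;
}
```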
Upvotes: 14
Reputation: 5794
In almost all modern systems, the specification for floating point arithmetic is the IEEE 754 standard. This details everything from the layout in memory to how truncation and rounding are propagated. It's a big area and something you quite often need to take detailed account of in scientific and engineering programming.
Upvotes: 0