Reputation: 3055
Today I have a weird question.
The Code (C++)
#include <iostream>

union name
{
    int num;
    float num2;
} oblong;

int main(void)
{
    oblong.num2 = 27.881;
    std::cout << oblong.num << std::endl;
    return 0;
}
The Code (C)
#include <stdio.h>

int main(void)
{
    float num = 27.881;
    printf("%d\n", num);
    return 0;
}
The Question
As we know, a C++ union can hold more than one type of data member, but only one at a time. So the name oblong reserves only a single portion of memory, 32 bits wide (because the biggest types in the union, int and float, are both 32-bit), and that portion can hold either an integer or a float.

So I assigned the value 27.881 to oblong.num2 (as you can see in the code above). Then, out of curiosity, I read the same memory through oblong.num, which refers to the same location. As expected, it gave me a value that is not 27, because a float and an int are represented differently in memory: when I read the memory through oblong.num, its contents are interpreted as an integer.

I know this phenomenon also happens in C, which is why I initialized a float variable with a value and then read it back using the %d format specifier, with the same value 27.881 as you can see above. But when I run it, something weird happens: the value I get in C is different from the one in C++.

Why does this happen? From what I know, the two values I get from the two programs are not garbage values, so why are they different? I also used sizeof to verify that both the C and C++ int and float are 32-bit, so the size of the types isn't what causes this. What prompts this difference in values?
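For reference, a small sketch that reads the float's bytes back as an int with memcpy (assuming a 32-bit float); on typical implementations it prints the same bit pattern as the union read in the C++ code above:

#include <cstdint>
#include <cstring>
#include <iostream>

int main()
{
    float num2 = 27.881f;
    std::int32_t num = 0;

    // Copy the float's object representation into an int (well-defined,
    // unlike reading the inactive member of a union in standard C++).
    std::memcpy(&num, &num2, sizeof num2);

    std::cout << num << std::endl;  // the float's bit pattern, read as an integer
    return 0;
}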
Upvotes: 6
Views: 647
Reputation: 471499
First of all, having the wrong printf() format string is undefined behavior. Now that said, here is what is actually happening in your case:

In vararg functions such as printf(), integers smaller than int are promoted to int, and floats smaller than double are promoted to double.
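To see that promotion directly, here is a minimal sketch with a hypothetical variadic helper, show_promoted(), that pulls its argument back out with va_arg:

#include <cstdarg>
#include <cstdio>

// Hypothetical helper: fetches its single variadic argument as a double,
// because a float passed through "..." undergoes default promotion to double.
static void show_promoted(int count, ...)
{
    va_list ap;
    va_start(ap, count);
    double d = va_arg(ap, double);  // the float argument arrives as a double
    va_end(ap);
    std::printf("%f (sizeof(double) = %zu)\n", d, sizeof(double));
}

int main()
{
    float f = 27.881f;
    show_promoted(1, f);  // prints roughly 27.881000
    return 0;
}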
The result is that your 27.881 is converted to an 8-byte double as it is passed into printf(). Therefore, the binary representation is no longer the same as that of a float.
The format string %d expects a 4-byte integer, so in effect you will be printing the lower 4 bytes of the double-precision representation of 27.881 (assuming little-endian).
*Actually (assuming strict FP), you are seeing the bottom 4 bytes of 27.881 after it is cast to float and then promoted to double.
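As a rough illustration, the following sketch recreates that value by hand: it truncates 27.881 to float, promotes it back to double, and reads the low 4 bytes of the double's representation (assuming little-endian, and an ABI where the variadic argument is read back from memory, e.g. 32-bit x86):

#include <cstdint>
#include <cstdio>
#include <cstring>

int main()
{
    // 27.881 is first truncated to float (the type of the variable in the C code),
    // then promoted back to double when passed through printf's "...".
    double promoted = static_cast<double>(27.881f);

    // On a little-endian machine the first 4 bytes of the double are its low half,
    // which is the chunk %d would end up reading on a stack-based vararg ABI.
    std::int32_t low = 0;
    std::memcpy(&low, &promoted, sizeof low);

    std::printf("low 4 bytes as int: %d\n", low);
    return 0;
}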
Upvotes: 9
Reputation: 40633
In both cases you are encountering undefined behaviour. Your implementation just happens to do something strange.
Upvotes: 2