Reputation: 35
I am running some test code while learning C. I am curious about the representation of negative numbers in hexadecimal, for which I have written the test code below. To my surprise, running it prints only zeros:
#include <stdio.h>

int main(void)
{
    union unin
    {
        char chrArray[4];
        float flotVal;
    } uninObj;

    uninObj.flotVal = -25;
    printf("%x %x %x %x", uninObj.chrArray[0], uninObj.chrArray[1],
           uninObj.chrArray[2], uninObj.chrArray[3]);
    printf("\n Float in hex: %x", uninObj.flotVal);
    return 0;
}
Upvotes: 0
Views: 2182
Reputation: 123448
First of all, this statement is flat-out wrong:
printf("\n Float in hex: %x",uninObj.flotVal);
%x expects its corresponding argument to be unsigned int, and passing an argument of a different type (as you do here) results in undefined behavior - the output could literally be anything.
As of C99, you can use the %a or %A specifiers to print a hexadecimal representation of the floating-point value ([-]0xh.hhhh...p±d, where each h is a hex digit). This is not the same thing as the value's binary representation in memory, though.
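For example, here is a minimal sketch (assuming a hosted C99 compiler) of what %a prints for -25:

#include <stdio.h>

int main(void)
{
    float f = -25;

    /* The float argument is promoted to double, which %a expects.
       Typical output: -0x1.9p+4, i.e. -1.1001 (binary) * 2^4 = -25 -
       the value in hex scientific notation, not its bytes in memory. */
    printf("%a\n", f);
    return 0;
}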
For the union, I'd suggest you use unsigned char for chrArray and use %hhx to print out each byte:
union unin
{
    unsigned char chrArray[sizeof (float)];
    float flotVal;
} uninObj;

printf("%hhx %hhx %hhx %hhx", uninObj.chrArray[0], uninObj.chrArray[1],
       uninObj.chrArray[2], uninObj.chrArray[3]);
The hh length modifier in %hhx says that the corresponding argument has type unsigned char, rather than the unsigned int that %x usually expects.
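Putting it all together, a complete version might look like this (on a little-endian machine with 32-bit IEEE 754 floats you would likely see 0 0 c8 c1):

#include <stdio.h>

int main(void)
{
    union unin
    {
        unsigned char chrArray[sizeof (float)];
        float flotVal;
    } uninObj;

    uninObj.flotVal = -25;

    /* Each byte is printed as an unsigned char - no sign extension,
       no undefined behavior. */
    printf("%hhx %hhx %hhx %hhx\n", uninObj.chrArray[0], uninObj.chrArray[1],
           uninObj.chrArray[2], uninObj.chrArray[3]);
    return 0;
}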
Upvotes: 0
Reputation: 3807
On my CentOS 7 Intel 64 architecture, sizeof(float) is 4, so the little-endian result that I see in my test, 00 00 C8 C1, is a negative number. The Intel single-precision floating-point representation is:
1 bit sign
8 bit exponent
23 bit significand (with the first 1 bit implied)
As the Intel architecture is little-endian, the floating-point value for 00 00 C8 C1 is 1100 0001 1100 1000 0000 0000 0000 0000. The first 1 means the number is negative. The next 8 bits, 10000011 (decimal 131), are the exponent, and the next 4 bits, 1001, with the implied 1 bit making 11001, are the number 25 shifted right 4 bits. The exponent of 131 is offset from 127 by 4, which is the number of bits that 1.1001 is shifted left to get back to 25.
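If you want to check this decomposition programmatically, here is a minimal sketch (assuming 32-bit IEEE 754 floats; memcpy is used to reinterpret the bytes safely):

#include <stdio.h>
#include <stdint.h>
#include <string.h>

int main(void)
{
    float f = -25.0f;
    uint32_t bits;
    memcpy(&bits, &f, sizeof bits);

    unsigned sign     = bits >> 31;           /* 1 bit  */
    unsigned exponent = (bits >> 23) & 0xFFu; /* 8 bits */
    unsigned mantissa = bits & 0x7FFFFFu;     /* 23 bits */

    /* Expected: sign=1, exponent=131, mantissa=0x480000
       (1001 followed by zeros), matching the walkthrough above. */
    printf("sign=%u exponent=%u mantissa=0x%06X\n", sign, exponent, mantissa);
    return 0;
}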
In a 64-bit representation, the exponent is 11 bits and the exponent offset is 1023. So you would expect the number to be 1 (negative sign), then decimal 1027 in 11 bits (100 0000 0011), then 25 decimal as 1001 with the implied leading 1 bit (as in the single-precision version), then all zeroes - together C0 39 00 00 00 00 00 00. You can see that the last 4 bytes are all zeros. But this is still little-endian, so as a 64-bit number it would look like 00 00 00 00 00 00 39 C0. So you are getting all zeros if you print the first 4 bytes.
You would see non-zero values from your program either by (a) specifying an 8-character array in the declaration and printing all 8 bytes (you would see two bytes with 39 C0), or (b) using a value other than -25 in your test that requires more binary digits to represent, like a large prime number or an irrational number (as suggested by @David C. Rankin). A sketch of option (a) is shown below.
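Here is that sketch, adapted to use a double member so the 8-byte pattern described above is well defined (assumes a little-endian machine with 8-byte IEEE 754 doubles):

#include <stdio.h>

int main(void)
{
    union unin
    {
        unsigned char chrArray[sizeof (double)];
        double dblVal;
    } uninObj;

    uninObj.dblVal = -25;

    /* Expected little-endian output: 0 0 0 0 0 0 39 c0 */
    for (size_t i = 0; i < sizeof uninObj.chrArray; i++)
        printf("%hhx ", uninObj.chrArray[i]);
    putchar('\n');
    return 0;
}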
Checking sizeof(float) would tell you your floating-point size in bytes; I would expect you to see it as 8, because you are seeing zeroes and not C8 C1 like I do.
Upvotes: 1
Reputation: 34584
Passing a float to a specifier expecting unsigned int is undefined behaviour.
Furthermore, the unsigned int expected by %x is not guaranteed to be the same size as float. So the attempt to trick printf that way may or may not "work".
Anyway, for a variadic function such as printf the compiler will promote an argument of type float to double, so that may be the reason why you (and I) get 0 output.
On my system, sizeof(double) is 8 and sizeof(unsigned int) is 4. And if you look at the bytes output for that part of the union, the first two are 0. So having passed 8 bytes to the function instead of the 4 expected by %x, the data is aligned in a way that %x is getting four 0-value bytes.
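One way to see the promotion at work is to do it by hand and dump the resulting bytes (a sketch assuming little-endian, 8-byte IEEE 754 doubles):

#include <stdio.h>
#include <string.h>

int main(void)
{
    float f = -25.0f;
    double promoted = f; /* what the default argument promotion does */
    unsigned char bytes[sizeof promoted];
    memcpy(bytes, &promoted, sizeof promoted);

    /* Expected little-endian output: 0 0 0 0 0 0 39 c0 -
       the first four bytes, which a 4-byte %x read would consume,
       are all zero. */
    for (size_t i = 0; i < sizeof bytes; i++)
        printf("%hhx ", bytes[i]);
    putchar('\n');
    return 0;
}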
Upvotes: 4