Jonathan Mee

Reputation: 38919

Printing out the Internal Hex of a float

So I thought that I could print the internal hex of a float like this:

const auto foo = 13.0F;
const auto bar = reinterpret_cast<const unsigned char*>(&foo);

printf("0x%02hhX%02hhX%02hhX%02hhX\n", bar[0], bar[1], bar[2], bar[3]);

This outputs:

0x00005041

But when I look at the debugger, the hex that it is reporting for foo is:

0x003EF830

Can someone help me understand why this didn't work and what I'd need to do to make it work?

Upvotes: 3

Views: 189

Answers (2)

BJovke

Reputation: 1983

You almost got it. I'm not sure what the debugger is displaying, but it's not your float. Maybe it's its address?

The actual float value is 0x41500000. You can check it here: IEEE-754 Floating Point Converter. If the link doesn't work, you'll have to find an online floating-point converter yourself.

You did it the right way, but you forgot that Intel x86 CPUs (I assume yours is one) are little endian. That means the more significant bytes sit at higher memory addresses, so the least significant byte of the float comes first in memory. You should therefore print the bytes in the reverse of the order you're using, like this:

printf("0x%02hhX%02hhX%02hhX%02hhX\n", bar[3], bar[2], bar[1], bar[0]);

Upvotes: 5

Mike Vine

Reputation: 9837

The hex value for the floating point value 13.0f is:

0x41500000

Confirmation:

Take 0x41500000 which is in binary:

0100,0001,0101,0000,0000,0000,0000,0000

which, split into a 1-bit sign, an 8-bit exponent, and a 23-bit mantissa, is:

0 : 10000010 : 10100000000000000000000

And we can interpret this using the standard floating-point evaluation:

(-1)^s x 1.<mantissa> x 2^(exponent-127)

as

(-1)^0 x 1.101 x 2^(10000010-1111111)

which is

1.101b x (10b)^11b

which is

1.625 x 8

which is

13.0f

Which means your output is correct (assuming a little-endian format).

What's incorrect is how you are reading it in the debugger.
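
The decomposition above is easy to check in code. Here's a minimal sketch (hypothetical, not part of the answer) that extracts the sign, exponent, and mantissa fields from the bit pattern and rebuilds the value with the formula given:

#include <cassert>
#include <cmath>
#include <cstdint>
#include <cstdio>
#include <cstring>

int main() {
    const float value = 13.0F;
    std::uint32_t bits;
    std::memcpy(&bits, &value, sizeof bits); // bits == 0x41500000

    const std::uint32_t sign     = (bits >> 31) & 0x1;   // 0
    const std::uint32_t exponent = (bits >> 23) & 0xFF;  // 10000010b == 130
    const std::uint32_t mantissa = bits & 0x7FFFFF;      // 101b followed by 20 zeros

    // (-1)^s x 1.<mantissa> x 2^(exponent - 127)
    const double rebuilt = (sign ? -1.0 : 1.0)
                         * (1.0 + mantissa / 8388608.0)  // 8388608 == 2^23
                         * std::ldexp(1.0, static_cast<int>(exponent) - 127);

    std::printf("%g\n", rebuilt); // prints 13
    assert(rebuilt == 13.0);
}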

Upvotes: 5
