Reputation: 9426
I was given a floating point variable and wanted to know what its byte representation is. So I went to IDEOne and wrote a simple program to do so. However, to my surprise, it causes a runtime error:
#include <stdio.h>
#include <assert.h>

int main()
{
    // These are their sizes here. So just to prove it.
    assert(sizeof(char) == 1);
    assert(sizeof(short) == 2);
    assert(sizeof(float) == 4);

    // Little endian
    union {
        short s;
        char c[2];
    } endian;
    endian.s = 0x00FF; // would be stored as FF 00 on little
    assert((char)endian.c[0] == (char)0xFF);
    assert((char)endian.c[1] == (char)0x00);

    union {
        float f;
        char c[4];
    } var;
    var.f = 0.0003401360590942204;
    printf("%x %x %x %x", var.c[3], var.c[2], var.c[1], var.c[0]); // little endian
}
On IDEOne, it outputs:
39 ffffffb2 54 4a
along with a runtime error. Why is there a runtime error, and why is the b2 actually ffffffb2? My guess with the b2 is sign extension.
Upvotes: 0
Views: 334
Reputation: 477454
Your approach is all kinds of wrong. Here's how you print a general object's binary representation:
#include <cstdio>
#include <cstddef>

template <typename T>
void hexdump(T const & x)
{
    unsigned char const * p = reinterpret_cast<unsigned char const *>(&x);

    for (std::size_t i = 0; i != sizeof(T); ++i)
    {
        std::printf("%02X", p[i]);
    }
}
The upshot is that you can always interpret any object as a character array and thus reveal its representation.
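For example, here is a minimal usage sketch (assuming the template and the two includes above are in scope; the expected output is inferred from the bytes the question already printed):

int main()
{
    float f = 0.0003401360590942204f;
    hexdump(f);        // bytes in memory order: 4A54B239 on IDEOne's little-endian machine
    std::printf("\n"); // (the question's printf printed c[3] down to c[0], i.e. the reverse order)
    return 0;
}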
Upvotes: 2
Reputation: 10667
Replacing char with unsigned char in the union and adding a return 0; at the end of main fixes all the problems: http://ideone.com/ienG2b.
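A sketch of what the fixed program might look like with those two changes applied, trimmed to the relevant union (the linked paste is not reproduced here, so the exact code it contains is an assumption):

#include <stdio.h>
#include <assert.h>

int main()
{
    union {
        float f;
        unsigned char c[4]; // unsigned, so no sign extension during promotion
    } var;

    assert(sizeof(float) == 4);
    var.f = 0.0003401360590942204;

    // Each c[i] now promotes to a non-negative int, so %x prints at most two hex digits.
    printf("%x %x %x %x\n", var.c[3], var.c[2], var.c[1], var.c[0]);

    return 0; // explicit exit status; the answer credits this with clearing the runtime error
}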
Upvotes: 5
Reputation:
char is a signed type on this platform (whether plain char is signed is implementation-defined). If it's 8 bits long and you store anything greater than 127 in it, the value doesn't fit; the conversion yields an implementation-defined (here, negative) result. Printing that value with a conversion specifier that expects an unsigned argument is also a problem (%x expects unsigned int, but char is promoted [implicitly converted] to signed int when passed to the variadic printf() function). That promotion sign-extends the negative value, which is why the b2 byte shows up as ffffffb2.
Bottom line - change char c[4] to unsigned char c[4] and it will work fine.
Upvotes: 6