Reputation: 501
The following code, when executed in Eclipse on Ubuntu and compiled with g++, produces unexpected results.
#include <iostream>

int main()
{
    unsigned int a = 5555;
    std::cout << (unsigned int)(((char*)&a)[0]) << "\n";
    std::cout << (unsigned int)(((char*)&a)[1]) << "\n";
    std::cout << (unsigned int)(((char*)&a)[2]) << "\n";
    std::cout << (unsigned int)(((char*)&a)[3]) << "\n";
    return 0;
}
I am trying to treat the variable a as an array of integers, each one byte in size. When I execute the program, this is the output I get:
4294967219
21
0
0
Why is the first value displayed so large? (Here int is 32 bits, or 4 bytes.) Surely each of the output values should be no greater than 255, right? And why are the last two values zero? In short, why am I getting the wrong result?
I also got the same result when testing in Code::Blocks with the same compiler.
Upvotes: 1
Views: 67
Reputation: 25855
This is because of sign extension. Let's look at your unsigned int a in memory (little-endian, as on your machine):
b3 15 00 00
When you cast the first byte from a signed char to an unsigned int, the cast from char to int happens before the conversion from signed to unsigned, and therefore the sign bit is extended; the result (0xffffffb3) is what you see on your first line.
Try casting to an unsigned char * instead of a char *.
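For example, a minimal variation of your program with only the cast changed to unsigned char *; the expected values in the comments assume a little-endian machine, which your output indicates:

#include <iostream>

int main()
{
    unsigned int a = 5555;
    // Each byte is read as an unsigned char, so it is already non-negative
    // before being widened to unsigned int, and no sign extension occurs.
    std::cout << (unsigned int)(((unsigned char*)&a)[0]) << "\n"; // 179 (0xb3)
    std::cout << (unsigned int)(((unsigned char*)&a)[1]) << "\n"; // 21  (0x15)
    std::cout << (unsigned int)(((unsigned char*)&a)[2]) << "\n"; // 0
    std::cout << (unsigned int)(((unsigned char*)&a)[3]) << "\n"; // 0
    return 0;
}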
Upvotes: 5
Reputation: 118425
This is because char is a signed integer type on your platform.
Decimal 5555 is hexadecimal 0x15b3.
The 0xb3 byte, when sign-extended to a signed int, becomes 0xffffffb3.
0xffffffb3 interpreted as an unsigned int is 4294967219 in decimal.
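To see the chain step by step, here is a small sketch; it assumes, as on your machine, a little-endian layout and a signed plain char:

#include <iostream>

int main()
{
    unsigned int a = 5555;                                  // 0x000015b3
    char low = ((char*)&a)[0];                              // lowest byte 0xb3, read as a signed char: -77
    std::cout << (int)low << "\n";                          // -77
    std::cout << (unsigned int)low << "\n";                 // 4294967219 (0xffffffb3)
    std::cout << (unsigned int)(unsigned char)low << "\n";  // 179 (0xb3), no sign extension
    return 0;
}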
Upvotes: 6