Reputation: 60
When converting a small signed int holding a negative value to a larger unsigned int, what should the value be? Example:
int8_t small_int = 0x80;
uint16_t big_uint = (uint16_t)small_int; // result should be 0xFF80 or 0x0080?
I was expecting 0x0080, reasoning from the C standard like this:
0x80 + 0xFFFF = 0x007F (keeping 16 bits), then adding 1 gives 0x0080.
But my simulator gives 0xFF80 for big_uint. I tried this for 8 => 16, 16 => 32, and 32 => 64; all come out the same (i.e. sign extended). Is this because the compiler actually does this:
uint16_t big_uint = (uint16_t)((int16_t)small_int);
If so, could someone clarify what 6.3.1.3 paragraph 2 actually means, with a simple example please?
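Here is a minimal program that reproduces what I see (the printf is only there to display the result):

#include <stdio.h>
#include <stdint.h>

int main(void) {
    int8_t small_int = 0x80;
    uint16_t big_uint = (uint16_t)small_int;
    printf("%X\n", (unsigned)big_uint); /* FF80 on my simulator, not 80 */
    return 0;
}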
Upvotes: 1
Views: 300
Reputation: 224546
int8_t is a two’s complement type, so the bits 10000000 in it represent −128.

For uint16_t, one more than the maximum value that can be represented is one more than 65,535 (hexadecimal FFFF), which is 65,536 (hexadecimal 10000). −128 plus 65,536 is 65,408, which is FF80 in hexadecimal.
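As a quick sketch of that arithmetic (long constants are used so it also works where int is 16 bits), the rule and the cast give the same result:

#include <stdio.h>
#include <stdint.h>

int main(void) {
    int8_t small_int = -128;             /* bits 10000000 */
    /* 6.3.1.3 2: add one more than the maximum of the new type,
       65,536, until the value is in range: -128 + 65,536 = 65,408. */
    unsigned by_rule = -128L + 65536L;
    unsigned by_cast = (uint16_t)small_int;
    printf("%X %X\n", by_rule, by_cast); /* FF80 FF80 */
    return 0;
}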
Backing up, though, there is a problem in your first declaration: int8_t small_int = 0x80;. int8_t can represent values from −128 to +127, but 0x80 represents the value 128, which does not fit. When it is used to initialize an int8_t, a conversion is performed. Per C 2018 6.3.1.3 3, converting a value that cannot be represented to a signed integer type either produces an implementation-defined result or raises an implementation-defined signal. It is common for compilers to wrap modulo the number of values representable in the type (256), but this is not guaranteed by the C standard. Properly, your definition should be int8_t small_int = -128; or int8_t small_int = -0x80;.
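For example, a small sketch of the difference (the first result is the common wrap but, as above, it is implementation-defined):

#include <stdio.h>
#include <stdint.h>

int main(void) {
    int8_t wraps    = 0x80;     /* value 128 does not fit; implementation-defined */
    int8_t portable = -0x80;    /* value -128, always representable */
    int8_t clearest = INT8_MIN; /* same value, via the standard macro */
    printf("%d %d %d\n", wraps, portable, clearest); /* commonly -128 -128 -128 */
    return 0;
}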
Upvotes: 2
Reputation: 52632
The value 0x80, stored in a signed 8-bit int, has the value −128. The new value is obtained by repeatedly adding one more than the maximum of the destination type, 0x10000, to that −128 until the result is in range, giving 0xFF80.
There is no intermediate conversion to int16_t; it’s not needed. The addition is done with the real mathematical values.
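A sketch covering the widths from the question (initializing with -128 directly to avoid the implementation-defined 0x80 issue; the PRIX macros are from <inttypes.h>):

#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

int main(void) {
    int8_t small_int = -128;
    /* The mathematical value -128 is brought into each destination
       type's range directly; no widening to a signed type happens first. */
    printf("%" PRIX16 "\n", (uint16_t)small_int); /* FF80 */
    printf("%" PRIX32 "\n", (uint32_t)small_int); /* FFFFFF80 */
    printf("%" PRIX64 "\n", (uint64_t)small_int); /* FFFFFFFFFFFFFF80 */
    return 0;
}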
Upvotes: 1