Reputation: 2190
I'm following an example from Stroustrup's The C++ Programming Language, 4th Ed., page 143 (note the erratum there: -160 should be -140).
Specifically the conversion of a signed int to a signed char.
#include <iostream>
using namespace std;

int main(int argc, char *argv[])
{
    signed char sc = -140;
    cout << "signed int = "
         << int((signed int) 0b11111111111111111111111101110100)
         << endl; // -140, 4 bytes
    cout << "signed int & 0xFF = " << int((signed int) -140 & 0xFF)
         << endl; // 116, narrowed to the low byte
    unsigned char uc = sc; // 256 - 140 = 116
    return 0;
}
I understand how the conversion is a narrowing from 4 bytes down to 1 byte, which results in -140 being converted to 116. I am confused by the comment Stroustrup adds to the line unsigned char uc = sc, namely // 256 - 140 = 116. I see it gives the correct answer, 116, but I'm unsure how that conversion is done. I'm aware that an unsigned char with all 8 bits set holds the maximum value 255, i.e. 2^8 - 1. Does anyone know why the math in the comment works?
UPDATE: The solution is -140 mod 256 = 116.
Thanks
Upvotes: 1
Views: 1420
Reputation: 308520
A two-byte int hasn't been common for a long time now; most ints are 4 bytes. Regardless, it's the top bit that matters for a signed number. The expression -140 & 0xFF results in 0b01110100, which is 116, because all those sign bits get chopped off.
To make it even simpler, let's look at the full bit patterns.
-140 = 0b11111111111111111111111101110100 (sign bit and upper bits set)
0xff = 0b00000000000000000000000011111111 (bottom 8 bits set, all others zero)
   & = 0b00000000000000000000000001110100 = 116
Upvotes: 1
Reputation: 96886
When converting a value to an unsigned type T, the value is taken modulo N, where N = 2^(sizeof(T)*CHAR_BIT). In simple terms, it means that N*i is added to the value to put it in the representable range, where i is an integer (possibly negative). You can always determine a single specific value of i that puts the value into the representable range. In your case, i = 1, so 256 * 1 is added to the value.
Upvotes: 2