Reputation: 21444
If I have the following:
char v = 32; // 0010 0000
then I do:
v << 2
the number becomes negative. // 1000 0000 = -128
I read the standard, but all it says is:
If E1 has a signed type and nonnegative value, and E1 × 2^E2 is representable in the result type, then that is the resulting value; otherwise, the behavior is undefined.
so I don't understand whether there is a rule that when a bit is shifted into the leftmost position, the number must become negative.
I'm using GCC.
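For reference, here is a minimal complete program showing what I see (a sketch assuming GCC on a machine with an 8-bit two's-complement char):
#include <stdio.h>

int main(void)
{
    char v = 32;        /* 0010 0000 */
    char r = v << 2;    /* converting the shifted value (128) back to char gives -128 here */
    printf("%d\n", r);  /* prints -128 on this setup */
    return 0;
}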
Upvotes: 1
Views: 692
Reputation: 73
Try using unsigned char instead. A plain (signed) char spends one bit on the sign, so by using unsigned char you have more bits available for representing the value:
unsigned char var = 32;
var = var << 2;   /* 128 fits in unsigned char (0..255) */
Upvotes: 0
Reputation: 37
Signed primitive types like char use two's complement (http://en.wikipedia.org/wiki/Twos_complement) to encode values. What you are probably looking for is unsigned char, which doesn't encode the value using two's complement (no negatives).
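For example, a small sketch (assuming an 8-bit char with two's-complement signed representation) showing the difference:
#include <stdio.h>

int main(void)
{
    char s = 32;
    unsigned char u = 32;

    char sr = s << 2;           /* 128 doesn't fit in a signed 8-bit char */
    unsigned char ur = u << 2;  /* 128 fits in unsigned char (0..255)     */

    printf("signed char:   %d\n", sr);  /* typically -128 */
    printf("unsigned char: %d\n", ur);  /* 128 */
    return 0;
}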
Upvotes: 1
Reputation: 33004
Left shifting it by two would give (1000 0000)₂ = (128)₁₀.
If 128 is representable in char, i.e. you're on some machine (with a supporting compiler) that provides a char of size > 8 bits, then 128 would be the value you get (since it's representable in such a type).
Otherwise, if the size of a char is just 8 bits as on most common machines, then for a signed character type that uses two's complement for negative values the representable range is [-128, 127]. You're in undefined-behaviour land, since 128 is not representable as-is in that type.
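One way to stay out of that territory is to do the shift in a wider unsigned type and keep the result there; a minimal sketch (the cast through unsigned char is just to make the intent explicit):
#include <stdio.h>

int main(void)
{
    char v = 32;

    /* Widen to unsigned int before shifting so the result (128) stays representable. */
    unsigned int shifted = (unsigned int)(unsigned char)v << 2;

    printf("%u\n", shifted);    /* prints 128 */
    return 0;
}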
Upvotes: 1