Reputation: 1613
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    char i = -128;
    int j = i;
    printf("%d %u\n", j, j);
    return 0;
}
The result is -128 4294967168.
What I think happens is:
i: 10000000
and after the assignment, sign extension gives
j: 11111111 11111111 11111111 10000000
What I want to ask is: how does printf("%d", j) know to print -128? Does it just use the last byte? How does it work?
Thanks!
Upvotes: 0
Views: 430
Reputation: 183978
How does printf("%d", j) know to print -128? Does it just use the last byte?

It doesn't. It is told to print a signed int, so it takes the appropriate number of bytes - typically 4 nowadays - from the stack and interprets that bit pattern as a signed int.
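For illustration, here is a minimal sketch (assuming 32-bit int and two's complement representation, which covers typical machines today) that takes the exact 32-bit pattern from the question and reads it back once as a signed int and once as an unsigned int:

#include <stdio.h>
#include <string.h>

int main(void)
{
    unsigned int bits = 0xFFFFFF80u;   /* the 32-bit pattern the question arrives at */
    int as_signed;

    /* Reinterpret the same four bytes as a signed int; memcpy keeps this well defined. */
    memcpy(&as_signed, &bits, sizeof as_signed);

    printf("%d\n", as_signed);   /* -128 on a two's complement machine */
    printf("%u\n", bits);        /* 4294967168 */
    return 0;
}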
When you assign a negative char to an int variable, as in int j = i; here, what happens is not really sign extension but - since all values a char can represent are also representable as an int - a value-preserving conversion: the char i is converted to an int with the same value.
On two's complement machines, which are by far the most common nowadays, and also in ones' complement, that value-preserving conversion happens to coincide with sign extension, but if the representation is sign-and-magnitude, the conversion would be different.
Since -128 isn't representable as a signed eight-bit integer in ones' complement or sign-and-magnitude, let's look at what happens to the bit patterns when converting -127 to a 32-bit signed integer with the same kind of representation:
Two's complement:
10000001 -> 11111111 11111111 11111111 10000001
Ones' complement:
10000000 -> 11111111 11111111 11111111 10000000
Sign-and-magnitude:
11111111 -> 10000000 00000000 00000000 01111111
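On a two's complement machine you can watch that coincidence with sign extension directly by printing the bit patterns in hex; a small sketch, assuming 8-bit char and 32-bit int:

#include <stdio.h>

int main(void)
{
    signed char c = -127;
    int i = c;                /* value-preserving conversion */

    /* The cast to unsigned char keeps the 8-bit pattern from being
       sign-extended by the default argument promotion. */
    printf("8-bit pattern:  0x%02X\n", (unsigned)(unsigned char)c);  /* 0x81 */
    printf("32-bit pattern: 0x%08X\n", (unsigned)i);                 /* 0xFFFFFF81 */
    return 0;
}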
Upvotes: 6
Reputation: 755074
Here's a simple test program that may help you understand what's going on. Note that it uses the C99 length modifier hh, which means:

hh: Specifies that a following d, i, o, u, x, or X conversion specifier applies to a signed char or unsigned char argument (the argument will have been promoted according to the integer promotions, but its value shall be converted to signed char or unsigned char before printing); or that a following n conversion specifier applies to a pointer to a signed char argument.
This may help you understand how the types are handled.
#include <stdio.h>
#include <limits.h>

static void print_value(signed char sc, unsigned char uc, /*plain*/ char pc)
{
    int j1 = sc;
    int j2 = uc;
    int j3 = pc;
    printf("%-9s %4hhd %4hhu %4d 0x%hhX %10u\n", "Signed:",   j1, j1, j1, j1, j1);
    printf("%-9s %4hhd %4hhu %4d 0x%hhX %10u\n", "Unsigned:", j2, j2, j2, j2, j2);
    printf("%-9s %4hhd %4hhu %4d 0x%hhX %10u\n", "Plain:",    j3, j3, j3, j3, j3);
}

static void check_value(int i)
{
    signed char sc = i;
    unsigned char uc = i;
    /*plain*/ char pc = i;
    print_value(sc, uc, pc);
}

int main(void)
{
    for (int i = 0; i <= 3; i++)
        check_value(i);
    for (int i = SCHAR_MAX - 3; i <= SCHAR_MAX + 3; i++)
        check_value(i);
    for (int i = UCHAR_MAX - 3; i <= UCHAR_MAX; i++)
        check_value(i);
    return 0;
}
Compiled with -fsigned-char (so 'plain' char is a signed type), the output is:
Signed: 0 0 0 0x0 0
Unsigned: 0 0 0 0x0 0
Plain: 0 0 0 0x0 0
Signed: 1 1 1 0x1 1
Unsigned: 1 1 1 0x1 1
Plain: 1 1 1 0x1 1
Signed: 2 2 2 0x2 2
Unsigned: 2 2 2 0x2 2
Plain: 2 2 2 0x2 2
Signed: 3 3 3 0x3 3
Unsigned: 3 3 3 0x3 3
Plain: 3 3 3 0x3 3
Signed: 124 124 124 0x7C 124
Unsigned: 124 124 124 0x7C 124
Plain: 124 124 124 0x7C 124
Signed: 125 125 125 0x7D 125
Unsigned: 125 125 125 0x7D 125
Plain: 125 125 125 0x7D 125
Signed: 126 126 126 0x7E 126
Unsigned: 126 126 126 0x7E 126
Plain: 126 126 126 0x7E 126
Signed: 127 127 127 0x7F 127
Unsigned: 127 127 127 0x7F 127
Plain: 127 127 127 0x7F 127
Signed: -128 128 -128 0x80 4294967168
Unsigned: -128 128 128 0x80 128
Plain: -128 128 -128 0x80 4294967168
Signed: -127 129 -127 0x81 4294967169
Unsigned: -127 129 129 0x81 129
Plain: -127 129 -127 0x81 4294967169
Signed: -126 130 -126 0x82 4294967170
Unsigned: -126 130 130 0x82 130
Plain: -126 130 -126 0x82 4294967170
Signed: -4 252 -4 0xFC 4294967292
Unsigned: -4 252 252 0xFC 252
Plain: -4 252 -4 0xFC 4294967292
Signed: -3 253 -3 0xFD 4294967293
Unsigned: -3 253 253 0xFD 253
Plain: -3 253 -3 0xFD 4294967293
Signed: -2 254 -2 0xFE 4294967294
Unsigned: -2 254 254 0xFE 254
Plain: -2 254 -2 0xFE 4294967294
Signed: -1 255 -1 0xFF 4294967295
Unsigned: -1 255 255 0xFF 255
Plain: -1 255 -1 0xFF 4294967295
Compiled with -funsigned-char (so 'plain' char is an unsigned type), the output is:
Signed: 0 0 0 0x0 0
Unsigned: 0 0 0 0x0 0
Plain: 0 0 0 0x0 0
Signed: 1 1 1 0x1 1
Unsigned: 1 1 1 0x1 1
Plain: 1 1 1 0x1 1
Signed: 2 2 2 0x2 2
Unsigned: 2 2 2 0x2 2
Plain: 2 2 2 0x2 2
Signed: 3 3 3 0x3 3
Unsigned: 3 3 3 0x3 3
Plain: 3 3 3 0x3 3
Signed: 124 124 124 0x7C 124
Unsigned: 124 124 124 0x7C 124
Plain: 124 124 124 0x7C 124
Signed: 125 125 125 0x7D 125
Unsigned: 125 125 125 0x7D 125
Plain: 125 125 125 0x7D 125
Signed: 126 126 126 0x7E 126
Unsigned: 126 126 126 0x7E 126
Plain: 126 126 126 0x7E 126
Signed: 127 127 127 0x7F 127
Unsigned: 127 127 127 0x7F 127
Plain: 127 127 127 0x7F 127
Signed: -128 128 -128 0x80 4294967168
Unsigned: -128 128 128 0x80 128
Plain: -128 128 128 0x80 128
Signed: -127 129 -127 0x81 4294967169
Unsigned: -127 129 129 0x81 129
Plain: -127 129 129 0x81 129
Signed: -126 130 -126 0x82 4294967170
Unsigned: -126 130 130 0x82 130
Plain: -126 130 130 0x82 130
Signed: -4 252 -4 0xFC 4294967292
Unsigned: -4 252 252 0xFC 252
Plain: -4 252 252 0xFC 252
Signed: -3 253 -3 0xFD 4294967293
Unsigned: -3 253 253 0xFD 253
Plain: -3 253 253 0xFD 253
Signed: -2 254 -2 0xFE 4294967294
Unsigned: -2 254 254 0xFE 254
Plain: -2 254 254 0xFE 254
Signed: -1 255 -1 0xFF 4294967295
Unsigned: -1 255 255 0xFF 255
Plain: -1 255 255 0xFF 255
Compiled with GCC 4.7.1 on Mac OS X 10.7.4 (but using the standard C libraries on the platform).
Upvotes: 1
Reputation: 215627
Your program is invoking undefined behavior by passing the wrong type to printf. %u expects an unsigned argument but you passed a (signed) int. printf does not "know what to do" because it doesn't have to do anything in particular; it's free to do whatever happens, because you invoked UB.
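If you want both outputs without the mismatch, one option is to pass an argument whose type actually matches each conversion specifier; a minimal sketch, assuming the 32-bit int from the question:

#include <stdio.h>

int main(void)
{
    char i = -128;     /* assumes plain char is signed, as in the question */
    int j = i;

    /* %d gets an int and %u gets an unsigned int, so both specifiers
       match their arguments and the behavior is well defined. */
    printf("%d %u\n", j, (unsigned int)j);   /* -128 4294967168 with 32-bit int */
    return 0;
}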
Upvotes: 1
Reputation: 15813
You can force a cast to see the first byte of an integer:

    int j = -128;
    printf("%d", (char) j);

To see the second byte as decimal, force a cast as well:

    printf("%d", *(((char *) &j) + 1));

And the last byte of a (4-byte) integer:

    printf("%d", *(((char *) &j) + 3));
Upvotes: 1
Reputation: 21675
It just prints the unsigned interpretation of -128's bit pattern (as an int in this case).
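A sketch of the arithmetic behind that value, assuming a 32-bit unsigned int: converting -128 to unsigned adds UINT_MAX + 1, i.e. 4294967296 - 128 = 4294967168.

#include <stdio.h>
#include <limits.h>

int main(void)
{
    int j = -128;
    /* Conversion to unsigned is defined modulo UINT_MAX + 1 (2^32 here). */
    unsigned int u = (unsigned int)j;

    printf("%u\n", u);                      /* 4294967168 */
    printf("%u\n", UINT_MAX - 128u + 1u);   /* same value, computed directly */
    return 0;
}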
Upvotes: 2