atul_pant

Reputation: 99

typecasting unsigned char and signed char to int in C

#include <stdio.h>

int main()
{
  char ch1 = 128;
  unsigned char ch2 = 128;
  printf("%d\n", (int)ch1);
  printf("%d\n", (int)ch2);
  return 0;
}

The first printf statement outputs -128 and the second 128. As I understand it, both ch1 and ch2 hold the same binary representation of the stored number: 10000000. So when I typecast both values to int, how do they end up as different values?

Upvotes: 1

Views: 1094

Answers (4)

Vlad from Moscow

Reputation: 310980

For starters, these casts

printf("%d\n", (int)ch1);
printf("%d\n", (int)ch2);

are redundant. You could just write

printf("%d\n", ch1);
printf("%d\n", ch2);

because, due to the default argument promotions, an argument of an integer type whose rank is less than the rank of int is promoted to int, provided int can represent all values of the original type.
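
As a small sketch of the promotion at work (the second line's output assumes a typical platform with 8-bit char and 4-byte int):

#include <stdio.h>

int main(void)
{
  char c = 'A';
  /* In a variadic call the char argument is promoted to int,
     so %d matches it without an explicit cast. */
  printf("%d\n", c);                         /* 65 on ASCII platforms */
  /* sizeof does not promote its operand, but unary + does,
     so the second operand below has type int. */
  printf("%zu %zu\n", sizeof c, sizeof +c);  /* typically: 1 4 */
  return 0;
}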

The type char can behave as either signed char or unsigned char, depending on the implementation and on compiler options.

From the C Standard (5.2.4.2.1 Sizes of integer types <limits.h>)

2 If the value of an object of type char is treated as a signed integer when used in an expression, the value of CHAR_MIN shall be the same as that of SCHAR_MIN and the value of CHAR_MAX shall be the same as that of SCHAR_MAX. Otherwise, the value of CHAR_MIN shall be 0 and the value of CHAR_MAX shall be the same as that of UCHAR_MAX. The value UCHAR_MAX shall equal 2^CHAR_BIT − 1.

So it seems that by default the compiler you are using treats the type char as signed char.
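
One way to verify this on a given implementation is to print the <limits.h> macros; a minimal sketch:

#include <stdio.h>
#include <limits.h>

int main(void)
{
  /* If char is signed, CHAR_MIN equals SCHAR_MIN (-128 with 8-bit chars);
     if char is unsigned, CHAR_MIN is 0 and CHAR_MAX equals UCHAR_MAX. */
  printf("CHAR_MIN = %d, CHAR_MAX = %d\n", CHAR_MIN, CHAR_MAX);
  return 0;
}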

As a result, in the first of these declarations

char ch1 = 128;  
unsigned char ch2 = 128;

the internal representation 0x80 of the value 128 is interpreted as a signed value because the sign bit is set, and that value is equal to -128.

That is why the first call of printf outputs the value -128

printf("%d\n", (int)ch1);

while the second call of printf, which is passed an object of type unsigned char

printf("%d\n", (int)ch2);

outputs the value 128.
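
To make the point concrete, here is a small sketch (assuming 8-bit two's-complement chars, as on virtually all modern platforms) showing that both objects hold the same bits yet yield different values once promoted to int:

#include <stdio.h>
#include <string.h>

int main(void)
{
  char ch1 = 128;            /* implementation-defined result: typically -128 */
  unsigned char ch2 = 128;

  /* The object representations are identical... */
  printf("same bits: %s\n", memcmp(&ch1, &ch2, 1) == 0 ? "yes" : "no");

  /* ...but the values differ once each is promoted to int. */
  printf("%d %d\n", ch1, ch2);  /* typically: -128 128 */
  return 0;
}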

Upvotes: 1

R.. GitHub STOP HELPING ICE

Reputation: 215259

Your fundamental error here is a misunderstanding of what a cast (or any conversion) does in C. It does not reinterpret bits. It's purely an operation on values.

Assuming plain char is signed, ch1 has the value -128 and ch2 has the value 128. Both -128 and 128 are representable in int, and therefore the cast does not change their value. (Moreover, writing it is redundant, since the default promotions automatically convert variadic arguments of rank lower than int up to int.) Conversions can only change the value of an expression when the original value is not representable in the destination type.
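
A sketch contrasting a value conversion with an actual bit reinterpretation (assuming double and int64_t are both 8 bytes, with IEEE-754 doubles, as on common platforms):

#include <stdio.h>
#include <string.h>
#include <stdint.h>

int main(void)
{
  double d = 3.7;

  /* Conversion operates on the VALUE: the fraction is discarded. */
  int64_t converted = (int64_t)d;   /* 3 */

  /* memcpy reinterprets the object representation: the result is the
     raw IEEE-754 bit pattern of 3.7, a completely unrelated integer. */
  int64_t bits;
  memcpy(&bits, &d, sizeof bits);

  printf("converted:   %lld\n", (long long)converted);
  printf("bit pattern: %lld\n", (long long)bits);
  return 0;
}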

Upvotes: 1

diatrevolo

Reputation: 2822

An unsigned char can hold a value from 0 to 255. A signed char can hold a value from -128 to 127. Setting a signed char to 128 in your compiler probably wrapped around to the lowest possible value, which is -128.
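
A small sketch of that wrap-around (strictly, the out-of-range conversion back to signed char is implementation-defined, but two's-complement platforms typically wrap):

#include <stdio.h>

int main(void)
{
  signed char a = 127;   /* SCHAR_MAX with 8-bit chars */
  /* The addition happens in int (127 + 1 == 128); converting 128 back
     to signed char is implementation-defined and typically wraps. */
  a = (signed char)(a + 1);
  printf("%d\n", a);     /* typically: -128 */
  return 0;
}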

Upvotes: 1

Youssef13

Reputation: 4954

First of all, whether char is signed or unsigned depends on the compiler implementation. But since you got -128 for ch1, your compiler treats char as signed.

A signed char can only hold values from -128 to 127. So, a value of 128 for signed char wraps around to -128.

But an unsigned char can hold values from 0 to 255. So, a value of 128 remains the same.
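
A short sketch printing a few values around that boundary (the signed results assume typical two's-complement wrapping; the unsigned reduction modulo 256 is guaranteed by the standard):

#include <stdio.h>

int main(void)
{
  for (int v = 126; v <= 130; v++) {
    signed char   s = (signed char)v;    /* implementation-defined above 127 */
    unsigned char u = (unsigned char)v;  /* well-defined: reduced modulo 256 */
    printf("%3d -> signed %4d, unsigned %3u\n", v, s, (unsigned)u);
  }
  return 0;
}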

Upvotes: 8
