Plar625

Reputation: 75

How is the char data type distinguished as signed or unsigned?

I am currently learning C with K&R's book (2nd edition).

So far I've understood that there are 3 char types (typically 8 bits): plain char, which is usually signed by default but may be unsigned depending on the platform; signed char; and unsigned char.

The bit pattern of -1 in a signed char is 11111111, which is 0xFF in hex.

The bit pattern of 255 in an unsigned char is 11111111, which is also 0xFF in hex.

So both are the same? There is no sign bit to indicate whether the value is signed or unsigned? My question is: how is one distinguished from the other? I am obviously missing something here, but what? :-)

For int (4 bytes) there is a similar example:

The signed int value 255 is represented by the bit pattern 00000000 00000000 00000000 11111111.

The unsigned int value 255 is represented by the bit pattern 00000000 00000000 00000000 11111111.

Again, both are the same. So how will the system figure out whether it's a signed or an unsigned int?
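
For illustration, a minimal sketch (assuming an 8-bit char and two's-complement representation, as in the bit patterns above); both char values come out as the same raw byte 0xFF, and the two ints compare byte-for-byte equal:

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        signed char   sc = -1;   /* bit pattern 11111111 on a two's-complement machine */
        unsigned char uc = 255;  /* bit pattern 11111111 */

        /* Both objects hold the same single byte, 0xFF */
        printf("signed char   -1 : 0x%02x\n", (unsigned)(unsigned char)sc);
        printf("unsigned char 255: 0x%02x\n", (unsigned)uc);

        signed int   si = 255;
        unsigned int ui = 255;

        /* The two ints also contain identical bytes */
        printf("same bytes? %s\n", memcmp(&si, &ui, sizeof si) == 0 ? "yes" : "no");
        return 0;
    }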

Upvotes: 2

Views: 113

Answers (1)

ShadowRanger

Reputation: 155418

There is no difference except in how they're used. When you tell your compiler that a variable is signed or unsigned, it knows to use signed or unsigned instructions when performing mathematical operations on it. When you use printf, you explicitly provide format codes that tell the function whether the argument is signed or unsigned (e.g. %u vs. %d). By the time the program is running, yep, just looking at the registers and memory, you can't tell the difference between -1 (as a signed char) and 255 (as an unsigned char) on systems with CHAR_BIT == 8 and two's complement math, but the program has baked that knowledge into how it works with the otherwise indistinguishable values.
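
As a minimal sketch of that (assuming CHAR_BIT == 8 and two's-complement math, as above), the same byte 0xFF is printed and compared differently depending only on the declared type and the format code:

    #include <stdio.h>

    int main(void)
    {
        /* The same single byte, 0xFF, viewed through two different types */
        signed char   sc = -1;
        unsigned char uc = 255;

        /* The format code tells printf how to interpret the (promoted) value */
        printf("%d\n", sc);      /* prints -1  */
        printf("%d\n", uc);      /* prints 255 */

        /* The declared type tells the compiler which comparison to generate */
        printf("%d\n", sc < 0);  /* 1: signed compare sees a negative value */
        printf("%d\n", uc < 0);  /* 0: an unsigned value is never negative
                                       (compilers often warn this is always false) */
        return 0;
    }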

It's the same way that the integer 0 and NULL behave (or any other number and pointer with the same bit pattern); they're both just a bunch of zero bits, but numeric zero is manipulated and used as a number, while NULL is manipulated as a pointer. The bits are the same; the way the compiler and APIs use them is different.
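
A small sketch of that idea (assuming, as is common but not guaranteed, that a null pointer is represented as all-zero bits):

    #include <stdio.h>

    int main(void)
    {
        int   zero = 0;
        char *p    = NULL;

        /* On typical implementations the stored bytes of both start out all zero... */
        printf("first byte of int 0: %d\n", ((unsigned char *)&zero)[0]);
        printf("first byte of NULL : %d\n", ((unsigned char *)&p)[0]);

        /* ...but the compiler uses them differently: one in arithmetic, one as a pointer */
        printf("%d\n", zero + 1);    /* integer math          */
        printf("%d\n", p == NULL);   /* pointer comparison: 1 */
        return 0;
    }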

Upvotes: 5
