Reputation: 395
Two's complement is used to make it easier for the computer to compute the subtraction of two numbers. But how does the computer distinguish whether an integer is signed or unsigned? It's just 0s and 1s in its memory.
For example, the bit pattern 1111 1111
in memory may represent the number 255, but it can also represent -1.
Upvotes: 18
Views: 5985
Reputation: 91
Let's understand it at the C programming level:
int A; // Computer will allocate 32 bits in memory & name the memory block as A
unsigned int B; // Computer will allocate 32 bits in memory & name the memory block as B
A = 5; // Positive, so it is stored directly as 0000….000101
B = -5; // Computer will calculate the 2's complement of 5 and save 1111….111011 to memory
Format specifier = %d
: Stored number -> Interpret as 2's complement -> Convert to decimal -> Print
Format specifier = %u
: Stored number -> Convert to decimal directly -> Print
printf("A = %d", A); // Computer will interpret the data as a signed number and calculate accordingly
printf("B = %u", B); // Computer will interpret the data as an unsigned number
So in the computer, all data is stored in 2's complement form only. But when it comes to printing it, the data is interpreted based on the format specifier/conversion operator given by the programmer.
If both int and unsigned int simply create a named 32-bit memory block, why do we need two different data types?
The data type declaration helps the programmer remember how the data is meant to be interpreted when printing it, and therefore which format specifier to use.
Upvotes: 1
Reputation: 17757
It does not distinguish them. But thanks to two's complement, the computation is the same either way:
Below, d is appended to decimal numbers and b to binary numbers. All computations are on 8-bit integers.
-1d + 1d = 1111 1111b + 1b = 1 0000 0000b
But since we overflowed (yes, that's a 1 followed by eight 0s, on an 8-bit integer), the carry bit is discarded and the result is equal to 0.
-2d + 1d = 1111 1110b + 1b = 1111 1111b = -1d
-1d + 2d = 1111 1111b + 10b = 1 0000 0001b (this overflows) = 1b = 1d
-1d + -1d = 1111 1111b + 1111 1111b = 1 1111 1110b (this overflows) = 1111 1110b = -2d
And if you consider these operations on unsigned (binary values will be unchanged) :
255d + 1d = 1111 1111b + 1b = 1 0000 0000b (this overflows) = 0d
254d + 1d = 1111 1110b + 1b = 1111 1111b = 255d
255d + 2d = 1111 1111b + 10b = 1 0000 0001b (this overflows) = 1b = 1d
255d + 255d = 1111 1111b + 1111 1111b = 1 1111 1110b (this overflows) = 1111 1110b = 254d
Signed versus unsigned is thus just a matter of how the same bits are displayed, only relevant when showing the value to a human :-)
Upvotes: 6
Reputation: 75599
Signed and unsigned use the same data, but different instructions.
The computer stores signed and unsigned integers as the same data. I.e. 255 and -1 are the same bits. However, you tell the compiler what type the variable has. If it is signed, the compiler uses signed instructions to manipulate the variable (e.g. IDIV), and when it is unsigned, it uses other instructions (e.g. DIV). So the compiler produces a program that tells the CPU how to interpret the data.
Upvotes: 21