Reputation: 65
Could anyone help me understand the difference between signed/unsigned int, as well as signed/unsigned char? In this case, if it's unsigned wouldn't the value just never reach a negative number and continue on an infinite loop of 0's?
#include <stdio.h>

int main()
{
    unsigned int n = 3;
    while (n >= 0)
    {
        printf("%d", n);
        n = n - 1;
    }
    return 0;
}
Upvotes: 0
Views: 1029
Reputation: 14151
In this case, if it's unsigned wouldn't the value just never reach a negative number ...?
You are right. But in the statement printf("%d", n); you "deceived" the printf() function by using the conversion specifier d, telling it that the number in the variable n is signed.
Use the conversion specifier u instead: printf("%u", n);
... never reach a negative number and continue on an infinite loop of 0's?
No. "Never reaching a negative number" is not the same as "stopping at 0 and resisting further decrementing".
Other people already explained this. Here is my explanation, in the form of analogies:
Imagine a never-ending and never-beginning sequence of non-negative integers:
..., 0, 1, 2, 3, 0, 1, 2, 3, 0, 1, 2, 3, ... // the biggest is 3 only for simplicity
or like the numbers on an analog clock: you may increase or decrease a number forever, going round and round.
Upvotes: 0
Reputation: 47962
Two important things:
At one level, the difference between signed and unsigned values is just the way we interpret the bits. If we limit ourselves to 3 bits, we have:
bits | signed | unsigned |
---|---|---|
000 | 0 | 0 |
001 | 1 | 1 |
010 | 2 | 2 |
011 | 3 | 3 |
100 | -4 | 4 |
101 | -3 | 5 |
110 | -2 | 6 |
111 | -1 | 7 |
The bit patterns don't change; it's just a matter of interpretation whether we have them represent nonnegative integers from 0 to 2^N − 1, or signed integers from −2^N/2 to 2^N/2 − 1.
The other important thing to know is what operations are defined on a type. For unsigned types, addition and subtraction are defined so that they "wrap around" from 0 to 2N-1. But for signed types, overflow and underflow are undefined. (On some machines they wrap around, but not all.)
Finally, there's the issue of properly matching up your printf formats. For %d, you're supposed to give it a signed integer, but you gave it an unsigned one instead. Strictly speaking, that results in undefined behavior, too, but in this case (and not too surprisingly), what happened was that it took the same bit pattern and printed it out as if it were signed, rather than unsigned.
Upvotes: 2
Reputation: 11
Signed number representation covers negative as well as positive integers, while unsigned representation covers only non-negative integers. The code you wrote will run forever because n is unsigned and therefore always non-negative, so the condition n >= 0 is always true.
Upvotes: 1
Reputation: 8141
The terms signed and unsigned refer to how the CPU treats sequences of bits.
There are 2 important things to understand here: (1) how the CPU performs arithmetic on fixed-width sequences of bits, and (2) how the compiler decides whether those bits should be interpreted as signed or unsigned.
Let's start with (1).
Let's take 4-bit nibbles for example.
If we ask the CPU to add 0001 and 0001, the result should be 2, or 0010.
But if we ask it to add 1111 and 0001, the result should be 16, or 10000. But it only has 4 bits to contain the result. The convention is to wrap around, or circle, back to 0, effectively ignoring the Most Significant Bit. See also integer overflow.
Why is this relevant? Because it produces an interesting result. That is, according to the definition above, if we let x = 1111, then we get x + 1 = 0. Well, x, or 1111, now looks and behaves awfully like -1. This is the birth of signed numbers and operations. And if 1111 can be deemed as -1, then 1111 - 1 = 1110 should be -2, and so on.
Now let's look at (2).
When the C compiler sees you defining an unsigned int, it will use special CPU instructions for dealing with unsigned numbers, where it deems relevant. For example, this is relevant in jump instructions, where the CPU needs to know whether you mean to jump far forward or slightly backward. For this it needs to know whether your operand should be interpreted in a signed or an unsigned way.
The operation of adding two numbers, on the other hand, is fundamentally oblivious to the consequent interpretation. The only thing is that the CPU will turn on a special flag after an addition operation, to tell you whether a wrap-around has occurred, for your own auditing.
But the important thing to understand is that the sequence of bits doesn't change; only its interpretation.
To tie all of this to your example, subtracting 1 from an unsigned 0 will simply wrap around back to all ones (1111 in the 4-bit example), which is 2^32 - 1, or 4294967295, in your case.
Finally, there are other uses for signed/unsigned. For example, by the very fact it is defined as a different type, this allows functions to be written that define a contract where only unsigned integers, let's say, can be passed to it. Also, it's relevant when you want to display or print the number.
Upvotes: -1
Reputation: 213960
wouldn't the value just never reach a negative number
Correct, it can't be negative.
and continue on an infinite loop of 0's
No, it will wrap around from zero to the largest value of an unsigned int, which is well-defined behavior. If you use the correct conversion specifier %u instead of the incorrect %d, you'll notice this output:
3
2
1
0
4294967295
4294967294
...
Upvotes: 1