Reputation:
What I know is: an unsigned int cannot take negative values. If I take the maximum value of an unsigned int and increment it, I should get zero, i.e. the minimum value; and if I take the minimum value and decrement it, I should get the maximum value.

Then why is this happening?
#include <stdio.h>
#include <stdlib.h>
#include <limits.h>

int main(void)
{
    unsigned int ui;

    ui = UINT_MAX;
    ui++;
    printf("ui = %d", ui);

    ui = 0;
    ui--;
    printf("\n");
    printf("ui = %d", ui);

    return EXIT_SUCCESS;
}
Output:
ui = 0
ui = -1
Upvotes: 0
Views: 162
Reputation: 510
From man 3 printf:

    d, i   The int argument is converted to signed decimal notation

So, although the type of ui is unsigned int, printf is interpreting it as a signed int and showing it as such.
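Note that the wraparound itself is well defined for unsigned types; only the printing is misleading. A minimal sketch that checks the wrapped values directly with assertions instead of printing them:

#include <assert.h>
#include <limits.h>

int main(void)
{
    unsigned int ui = UINT_MAX;

    ui++;                     /* unsigned overflow is well defined: wraps to 0 */
    assert(ui == 0u);

    ui--;                     /* wraps back to the maximum value */
    assert(ui == UINT_MAX);

    return 0;
}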
Upvotes: 3
Reputation: 13690
You pass the value to a variadic function (printf), so nothing about its signedness is carried along with the argument. The %d conversion in the format string controls how the value is displayed: because you selected %d, printf interprets the argument as a signed int. That's why you see a signed value equivalent to the binary pattern 0xFFFFFFFF¹.

¹ Assuming a 32-bit width for int.
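To see that it really is the same bit pattern being read two ways, here is a small sketch (it assumes a 32-bit, two's-complement int, as on most current platforms) that copies the bits of an unsigned int into a signed int and prints both views:

#include <stdio.h>
#include <string.h>

int main(void)
{
    unsigned int ui = 0xFFFFFFFFu;  /* all 32 bits set; equals UINT_MAX here */
    int si;

    memcpy(&si, &ui, sizeof si);    /* reinterpret the same bit pattern */

    printf("as unsigned (%%u): %u\n", ui);  /* 4294967295 */
    printf("as signed   (%%d): %d\n", si);  /* -1 on two's-complement machines */

    return 0;
}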
Upvotes: 1
Reputation: 2856
That is because you are using the %d format specifier, which tells printf to treat your number as a signed integer. Use %u to output an unsigned value and you get the desired result.
#include <stdio.h>
#include <stdlib.h>
#include <limits.h>

int main(void)
{
    unsigned int ui;

    ui = UINT_MAX;
    ui++;
    printf("ui = %u", ui);

    ui = 0;
    ui--;
    printf("\n");
    printf("ui = %u", ui);

    return EXIT_SUCCESS;
}
Output:
ui = 0
ui = 4294967295
Check out the reference on possible format specifiers.
Upvotes: 2