Reputation: 9
I have been trying to understand the integer overflow in C-programming. I am confused about whether the final value output depends on the initial datatype given to the variable during declaration or the format specifier used to print the variable.
`
#include <stdio.h>

int main() {
    short int a = 32771;
    printf("%d\n", a);   // O/P : -32765
    printf("%u\n\n", a); // O/P : 4294934531

    int b = 32771;
    printf("%hd\n", b);  // O/P : -32765
    printf("%hu", b);    // O/P : 32771
    return 0;
}
`
a is declared as a short int at the very start, but initialized with a value that overflows the short int range. The printf("%d\n", a) statement prints the value as if a were a signed short int (2 bytes or 16 bits), whereas the printf("%u\n\n", a) statement prints it as if it were an unsigned int (4 bytes or 32 bits).
b is declared as an int (4 bytes or 32 bits) at the very start, and initialized with a value well within the int range. The printf("%hd\n", b) statement prints the value as if b were a signed short int (2 bytes or 16 bits), whereas the printf("%hu", b) statement prints it as if it were an unsigned short int.
Please explain this discrepancy. What exactly determines the final output value?
Upvotes: 0
Views: 131
Reputation: 222302
short int a = 32771;
In an initialization, the initial value, 32,771, is converted to the type of the object being initialized, short int, per C 2018 6.7.9 11 (paragraph 11 of clause 6.7.9 of the 2018 C standard) and 6.5.16 2.
In your C implementation, short int is 16 bits and can represent values from −32,768 to +32,767, so it cannot represent 32,771. When a value of integer type is converted to a signed integer type and cannot be represented in the new type, the result is implementation-defined or an implementation-defined signal is raised, per 6.3.1.3 3. This means the C implementation is required to document what it does for this. A common behavior is to wrap the result modulo 2^16, so the result is 32,771 − 65,536 = −32,765.
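As a minimal sketch of that common wrap-around behavior (not something the standard requires), the same result can be computed by hand:

#include <stdio.h>

int main(void) {
    long value = 32771;
    // Common implementation-defined behavior: wrap modulo 2^16 into the
    // range of a 16-bit signed short (-32768 .. 32767).
    long wrapped = value % 65536;   // reduce modulo 2^16
    if (wrapped > 32767)
        wrapped -= 65536;           // shift into the signed range
    printf("%ld\n", wrapped);       // prints -32765

    short s = 32771;                // the actual implementation-defined conversion
    printf("%d\n", s);              // typically also prints -32765
    return 0;
}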
printf("%d\n", a);
printf is declared with ... for arguments after its first. In such a call, the default argument promotions are performed on the trailing arguments, per 6.5.2.2 6. These promotions promote a short int to an int. The %d in the format string tells printf to expect an int, so it gets that int, −32,765, and prints it.
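A small check of that promotion (a sketch; both calls pass the same int argument):

#include <stdio.h>

int main(void) {
    short int a = -32765;
    // Because of the default argument promotions, these two calls pass
    // exactly the same int argument to printf.
    printf("%d\n", a);        // a is promoted to int automatically
    printf("%d\n", (int)a);   // same thing, with the promotion written out
    return 0;
}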
printf("%u\n\n", a);
Again a
is promoted to int
. However, %u
tells printf
to expect an unsigned int
. When the types do not match, the behavior is not defined by the C standard, per 7.21.6.3 2 and 7.21.6.1 9. However, it is common that printf
will reinterpret the bits of an int
as a representation of an unsigned int
, which is 32 bits in your C implementation. When a C implementation uses two’s complement, the bits that represent −32,765 in a 32-bit int
are FFFF800316. Interpreted as an unsigned int
, these bits represent 4,294,934,531, so that is what is printed.
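A sketch of that reinterpretation, assuming a 32-bit two’s complement int (the bits are copied with memcpy so that inspecting them is itself well defined):

#include <stdio.h>
#include <string.h>

int main(void) {
    int v = -32765;
    unsigned int u;
    // Copy the object representation of v into an unsigned int of the
    // same size, then look at the result.
    memcpy(&u, &v, sizeof u);
    printf("%u\n", u);   // 4294934531 on a 32-bit two's complement implementation
    printf("%X\n", u);   // FFFF8003
    return 0;
}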
printf("%hd\n", b);
%hd tells printf to expect an int but to convert its value to short int before printing, per 7.21.6.1 7. So the conversion of 32,771 to short int produces −32,765, and that is what is printed.
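A sketch of what %hd effectively does with the passed int, assuming the wrap-around conversion described earlier:

#include <stdio.h>

int main(void) {
    int b = 32771;
    // %hd converts the received int to short before printing...
    printf("%hd\n", b);           // -32765
    // ...which is equivalent to converting it yourself and printing the short.
    short int s = (short int)b;   // implementation-defined conversion, commonly wraps
    printf("%d\n", s);            // -32765 (s is promoted back to int for %d)
    return 0;
}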
printf("%hu", b);
%hu tells printf to expect an int but to convert its value to unsigned short int before printing, per 7.21.6.1 7. (This can vary in other C implementations; printf will expect whatever type an unsigned short int is promoted to by the integer promotions in 6.3.1.1 2, which is int in your C implementation but could be unsigned int.) Since 32,771 can be represented in unsigned short int, the conversion does not change the value, and that is what is printed.
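And the corresponding sketch for %hu, assuming a 16-bit unsigned short (range 0 to 65,535, so 32,771 fits and is unchanged; this conversion is fully defined by the standard):

#include <stdio.h>

int main(void) {
    int b = 32771;
    printf("%hu\n", b);                            // 32771
    // Equivalent explicit conversion: 32771 fits in unsigned short,
    // so the value is unchanged.
    unsigned short int us = (unsigned short int)b;
    printf("%u\n", (unsigned int)us);              // 32771
    return 0;
}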
Upvotes: 0
Reputation: 1
The type of the variable is determined when you create it.
What happens is that when printf() parses the const char * format and finds %u in the string, it knows it must print an unsigned int, which it reads with a va_arg() call (see the man page). It then prints that unsigned int with a write() call.
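To illustrate the mechanism (a toy sketch, not the real printf implementation), a variadic function reads an unsigned int argument with va_arg like this:

#include <stdarg.h>
#include <stdio.h>

// Toy version of what printf does for a single "%u": pull one
// unsigned int out of the variable argument list and print it.
void print_one_unsigned(const char *label, ...)
{
    va_list ap;
    va_start(ap, label);
    unsigned int value = va_arg(ap, unsigned int); // read the next argument as unsigned int
    va_end(ap);
    printf("%s: %u\n", label, value);
}

int main(void)
{
    print_one_unsigned("value", 42u);
    return 0;
}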
Just for info:
An overflow condition may give results leading to unintended behavior. In particular, if the possibility has not been anticipated, overflow can compromise a program's reliability and security.
If you want to dig deeper into bits and overflow, you can print the bits:
#include <stdio.h>
#include <stddef.h>
#include <stdint.h>

/* Print the value of x followed by its 16-bit two's complement pattern. */
void print_bits(short int x)
{
    printf("%d : \n", x);
    for (int16_t i = sizeof(x) * 8 - 1; i >= 0; i--)
    {
        printf("%d", (x >> i) & 1);  /* extract bit i, most significant first */
        if (i % 8 == 0)
            printf(" ");             /* group the bits by byte */
    }
    printf("\n");
}

int main(void)
{
    short int a = 32767;  /* SHRT_MAX on this implementation */
    print_bits(a);
    a++;                  /* wraps to -32768 on typical two's complement implementations */
    print_bits(a);
    return (0);
}
That gives us something like this:
➜ RTFM ./a.out
32767 :
01111111 11111111
-32768 :
10000000 00000000
Upvotes: -4