Reputation: 9
In the first session of my algorithms class, the professor asked us to spot the errors and explain why the letter a is shown as the result. The answer has something to do with ASCII codes, but I did not get it.
#include <stdio.h>
int main()
{
    char var = 353;
    printf("%c", var);
    return 0;
}
Upvotes: 0
Views: 250
Reputation: 6846
It helps to view things in hex sometimes. The decimal value 353 is hex 0x161. Since a char is an 8-bit value, it can only hold numbers up to 0xff, which is decimal 255 (unsigned) or 127 (signed). Clearly 0x161 can't fit into a data type that can only count to 0xff, so the compiler truncated the upper bits to make it fit. The value that remains is 0x61, which is decimal 97, which is the letter a in ASCII.
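
A minimal way to see this for yourself (a throwaway sketch, assuming an 8-bit char as above) is to print the value before and after it is squeezed into the char:

#include <stdio.h>

int main(void)
{
    int  original = 353;   /* 0x161: needs 9 bits */
    char var = 353;        /* only the low 8 bits (0x61) survive */

    printf("original: %d (0x%x)\n", original, original);
    printf("stored:   %d (0x%x) -> '%c'\n", var, (unsigned)(unsigned char)var, var);
    return 0;
}

On a typical implementation this prints 353 (0x161) for the original and 97 (0x61) -> 'a' for the stored byte, which is the truncation described above.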
Upvotes: 0
Reputation: 781848
If char defaults to unsigned in your implementation, and char is 8 bits, then
char var = 353;
is equivalent to
char var = (353 % 256);
and the value of that modulus expression is 97. That's the ASCII code for a.
If char defaults to signed char, the conversion is implementation-defined because 353 is too large to fit. If you're still getting a, it's because the implementation happens to apply the same modular arithmetic when an out-of-range value is converted to a signed type, which is common. But you shouldn't depend on it, since it's implementation-specific.
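
Here's a minimal sketch of the difference (my own illustration, assuming 8-bit bytes): with an explicit unsigned char the modular reduction is guaranteed by the standard, while a plain (possibly signed) char only happens to give the same byte on typical implementations:

#include <stdio.h>

int main(void)
{
    unsigned char uvar = 353;  /* guaranteed: reduced modulo 256, so 97 */
    char          var  = 353;  /* implementation-defined if char is signed */

    printf("353 %% 256     = %d\n", 353 % 256);          /* 97 */
    printf("unsigned char = %d -> '%c'\n", uvar, uvar);  /* 97 -> 'a' */
    printf("plain char    = %d -> '%c'\n", var, var);    /* typically also 97 -> 'a' */
    return 0;
}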
Upvotes: 2