Reputation: 33
My question is relatively simple, but for some reason this bit of simple code perplexes me as to why it's not outputting any errors or warnings. Why am I able to store integers in a character array?
#include <stdio.h>
#include <stdlib.h>
int main(int argc, char **argv) {
char S[256];
// initialize array
int i;
for(i=0; i<256; i++) {
S[i] = i;
}
return 0;
}
Upvotes: 2
Views: 944
Reputation: 16290
This assignment is silently converting the integer to a char. This is perfectly legal in C.
S[i] = i;
The char type is typically 8 bits and signed, and by convention may hold a character. (The number of bits and the signedness of the type are technically platform-dependent but rarely differ in practice.) So the low 8 bits of the integer will be interpreted as a character.
Upvotes: 0
Reputation: 205
A 'char' type designates an integer type that is (at least) 8 bits wide. Characters are actually "seen" by your program as integers, commonly according to the ASCII table http://www.asciitable.com/.
When you write your for-statement:
for(i=0; i<256; i++)
S[i] = i;
the highest value taken by i and stored into your array S is 255, which is 0xFF (binary 1111 1111). That still fits in the 8 bits of a char, so it can be successfully stored, although if char is signed on your platform, values above 127 are converted in an implementation-defined way (commonly wrapping around to negative values).
Upvotes: 1
Reputation:
A char is just a small int. So this is completely legal:
int a = 5;
char b = a;
The only thing to watch for, really, is whether the integer stores a value too large to represent in the char. The actual limits vary by platform.
Upvotes: 1
Reputation: 490108
I wasn't going to answer this, but every answer that's been posted so far is just close enough to right to be misleading in one way or another.
In C and C++, char is a small integer type that occupies an amount of storage that the C and C++ standards agree to call a byte -- but their byte may or may not correspond to what anybody/anything else calls a byte. It is guaranteed to be at least 8 bits, because it must be able to store values from -127 to +127, or else from 0 to 255.
There are two other types named signed char and unsigned char. A char (specified as neither signed nor unsigned) has the same range as either signed char or unsigned char (but there's no guarantee/requirement about which, and many compilers support a flag to switch from one to the other). Although it has the same range as one of the other two, a plain char is still a separate type from either of the other two (e.g., you can have a function overloaded on all three types).
As noted above, char is required to have a range that requires at least 8 bits to store -- but it can be larger if an implementation desires (though, in fact, compilers with char larger than 8 bits are actually pretty unusual).
When you assign a value like 1 with type int to a char, the value is converted (if possible) to the same value represented as a char. If it can't be represented, the conversion will depend on whether a char is signed or unsigned. If it's unsigned, then the value will be reduced modulo 2^n (where n is the number of bits in a char), just like other unsigned types. If it's signed, the result is implementation-defined.
Note that this is a conversion, but not a cast. As defined in either C or C++, a cast is an explicit notation to cause a conversion. The conversion itself is exactly that -- a conversion. Without the explicit notation (e.g., (char)i in C, or static_cast<char>(i) in C++), what you have is a conversion but not a cast.
Upvotes: 5
Reputation: 97
Basically, the char and int data types are integer types that are at least 1 byte and 2 bytes wide, respectively (on most modern platforms an int is 4 bytes). When the compiler sees an assignment from an int value to a char variable, it simply truncates the value so it fits in the size of the char data type.
Upvotes: 1
Reputation: 69286
Characters in C are represented as small (typically 8-bit) integers. Therefore you can treat them as integers and vice versa.
// For example:
#include <stdio.h>

int main(void) {
    int a = 3;
    char b = 'b';
    a = a + b;         // b is promoted to int, so this adds 98
    printf("%d\n", b); // prints 98 (ASCII code for 'b')
    printf("%d\n", a); // prints 101 (3 + 98)
    return 0;
}
Upvotes: 1