Reputation: 669
I just started learning C and am rather confused about declaring characters using int and char.
I am well aware that characters are represented by integers, in the sense that each character's "integer" is its ASCII code.
That said, I learned that it's perfectly possible to declare a character using int without writing out the ASCII code. E.g., declaring a variable test as the character 'X' can be written as:
char test = 'X';
and
int test = 'X';
For both declarations, the conversion specifier is %c (even though test is defined as int).
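For instance, a minimal test (assuming a standard hosted environment with printf) prints the letter X in both cases:
#include <stdio.h>

int main(void)
{
    char c = 'X';
    int  i = 'X';

    /* %c converts its argument to unsigned char and prints that character,
       so both lines print X */
    printf("%c\n", c);
    printf("%c\n", i);
    return 0;
}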
Therefore, my question is: what are the differences between declaring character variables using char and int, and when should int be used to declare a character variable?
Upvotes: 45
Views: 68950
Reputation: 149185
The difference is the size in bytes of the variable, and from that, the range of values the variable can hold.
A char is required to hold at least all values between 0 and 127 (inclusive), so in common environments it occupies exactly one byte (8 bits). Whether it is signed (-128 to 127) or unsigned (0 to 255) is implementation-defined.
An int is required to be at least a 16-bit signed word and to hold all values between -32767 and 32767. That means an int can hold every value of a char, whether the latter is signed or unsigned.
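As a quick check, here is a small sketch using <limits.h> that prints the sizes and ranges on whatever implementation compiles it (the exact numbers are implementation-dependent):
#include <stdio.h>
#include <limits.h>

int main(void)
{
    printf("char: %zu byte(s), range %d to %d\n",
           sizeof(char), CHAR_MIN, CHAR_MAX);
    printf("int : %zu byte(s), range %d to %d\n",
           sizeof(int), INT_MIN, INT_MAX);
    return 0;
}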
If you want to store only characters in a variable, you should declare it as char. Using an int would just waste memory and could mislead a future reader. One common exception to that rule is when you want to process a wider value for special conditions. For example, the function fgetc from the standard library is declared as returning int:
int fgetc(FILE *fd);
because the special value EOF (for End Of File) is defined as the int value -1 (all bits set to one in a two's-complement system), a value that needs more than the size of a char to represent. That way no char (only 8 bits on a common system) can be equal to the EOF constant. If the function were declared to return a plain char, nothing could distinguish the EOF value from the (valid) character 0xFF.
That's the reason why the following code is bad and should never be used:
char c; // a terrible memory saving...
...
while ((c = fgetc(stdin)) != EOF) { // NEVER WRITE THAT!!!
...
}
Inside the loop a char would be enough, but for the EOF test not to succeed spuriously when the character 0xFF is read, the variable needs to be an int.
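A corrected sketch of the same loop, shown only to illustrate the point: read into an int, compare against EOF, and narrow to char afterwards if needed.
#include <stdio.h>

int main(void)
{
    int c;                 /* wide enough for every character value plus EOF */

    while ((c = fgetc(stdin)) != EOF) {
        char ch = (char)c; /* safe to narrow once EOF has been ruled out */
        putchar(ch);
    }
    return 0;
}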
Upvotes: 62
Reputation: 5487
The char type has multiple roles.
The first is that it is simply part of the chain of integer types, char, short, int, long, etc., so it's just another container for numbers.
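A tiny sketch of that first role (my own illustration, not required for the point): a char used purely as a small integer.
#include <stdio.h>

int main(void)
{
    char small = 100;       /* just a small integer, nothing character-like */
    small = small + 20;     /* ordinary integer arithmetic applies          */
    printf("%d\n", small);  /* prints 120                                   */
    return 0;
}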
The second is that its underlying storage is the smallest unit, and all other objects have a size that is a multiple of the size of char (sizeof returns a number in units of char, so sizeof char == 1).
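For instance, this small sketch shows sizeof measuring everything in units of char (the values other than 1 are implementation-dependent):
#include <stdio.h>

int main(void)
{
    printf("sizeof(char)   = %zu\n", sizeof(char));   /* always 1 */
    printf("sizeof(int)    = %zu\n", sizeof(int));    /* a multiple of sizeof(char) */
    printf("sizeof(double) = %zu\n", sizeof(double)); /* likewise */
    return 0;
}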
The third is that it plays the role of a character in a string, certainly historically. When seen like this, the value of a char maps to a specified character, for instance via the ASCII encoding, but it can also be used with multi-byte encodings (one or more chars together map to one character).
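And a minimal sketch of the third role, a char array holding a string (assuming an ASCII execution character set for the numeric value shown):
#include <stdio.h>

int main(void)
{
    char name[] = "Hi";  /* an array of char: 'H', 'i', and a terminating '\0' */

    printf("%s starts with %c (value %d in ASCII)\n", name, name[0], name[0]);
    return 0;
}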
Upvotes: 10
Reputation: 2637
The size of an int is 4 bytes on most architectures, while the size of a char is 1 byte.
Upvotes: 4
Reputation: 9631
I think there's no difference here, but you're allocating extra memory you're not going to use. You can also write const long a = 1;, but it is more suitable to use const char a = 1; instead.
Upvotes: 2
Reputation: 1168
Usually you should declare characters as char and use int for integers capable of holding larger values. On most systems a char occupies one byte, which is 8 bits. Depending on your system, this char might be signed or unsigned by default, so it will be able to hold values in the range 0 to 255 or -128 to 127.
An int might be 32 bits long, but if you really want exactly 32 bits for your integer, you should declare it as int32_t or uint32_t instead.
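A small sketch of that suggestion (assuming a C99-or-later compiler providing <stdint.h> and <inttypes.h>):
#include <stdint.h>
#include <inttypes.h>
#include <stdio.h>

int main(void)
{
    char     letter = 'X';    /* one byte, fine for a single character      */
    int32_t  count  = 100000; /* exactly 32 bits, signed, on every platform */
    uint32_t mask   = 0xFFu;  /* exactly 32 bits, unsigned                  */

    printf("%c %" PRId32 " %" PRIu32 "\n", letter, count, mask);
    return 0;
}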
Upvotes: 3