Reputation: 61
I am studying how the C language works. I can find definitions for types like int8_t, intptr_t, etc. in <stdlib.h>:
// Represents true-or-false values
typedef _Bool bool;
enum { false, true };
// Explicitly-sized versions of integer types
typedef __signed char int8_t;
typedef unsigned char uint8_t;
typedef short int16_t;
typedef unsigned short uint16_t;
typedef int int32_t;
typedef unsigned int uint32_t;
typedef long long int64_t;
typedef unsigned long long uint64_t;
// Pointers and addresses are 32 bits long.
// We use pointer types to represent virtual addresses,
// uintptr_t to represent the numerical values of virtual addresses,
// and physaddr_t to represent physical addresses.
typedef int32_t intptr_t;
typedef uint32_t uintptr_t;
typedef uint32_t physaddr_t;
However, I can't find the definitions of types like char. Thus my question is: where are int and char defined?
Upvotes: 4
Views: 2614
Reputation: 123598
Note that those "definitions" of int8_t, intptr_t, etc., are simply aliases for built-in types.
The basic data types char, int, long, double, etc., are all defined internally to the compiler - they're not defined in any header file. Their minimum ranges are specified in the language standard (a non-official, pre-publication draft is available here).
The header file <limits.h> will show the ranges of the different integer types for a particular implementation; here's an excerpt from the implementation I'm using:
/* Number of bits in a `char'. */
# define CHAR_BIT 8
/* Minimum and maximum values a `signed char' can hold. */
# define SCHAR_MIN (-128)
# define SCHAR_MAX 127
/* Maximum value an `unsigned char' can hold. (Minimum is 0.) */
# define UCHAR_MAX 255
/* Minimum and maximum values a `char' can hold. */
# ifdef __CHAR_UNSIGNED__
# define CHAR_MIN 0
# define CHAR_MAX UCHAR_MAX
# else
# define CHAR_MIN SCHAR_MIN
# define CHAR_MAX SCHAR_MAX
# endif
/* Minimum and maximum values a `signed short int' can hold. */
# define SHRT_MIN (-32768)
# define SHRT_MAX 32767
/* Maximum value an `unsigned short int' can hold. (Minimum is 0.) */
# define USHRT_MAX 65535
/* Minimum and maximum values a `signed int' can hold. */
# define INT_MIN (-INT_MAX - 1)
# define INT_MAX 2147483647
/* Maximum value an `unsigned int' can hold. (Minimum is 0.) */
# define UINT_MAX 4294967295U
/* Minimum and maximum values a `signed long int' can hold. */
# if __WORDSIZE == 64
# define LONG_MAX 9223372036854775807L
# else
# define LONG_MAX 2147483647L
# endif
Again, this doesn't define the types for the compiler; it's just informational. You can use these macros to guard against overflow, for example. There's a <float.h> header that does something similar for floating-point types.
The char type must be able to represent at least every value in the basic execution character set - the upper- and lower-case Latin alphabet, all decimal digits, common punctuation characters, and control characters (newline, form feed, carriage return, tab, etc.). char must be at least 8 bits wide, but may be wider on some systems. There's some weirdness regarding the signedness of char - the members of the basic execution character set are guaranteed to be non-negative ([0...127]), but additional characters may have positive or negative values, so "plain" char may have the same range as either signed char or unsigned char. It depends on the implementation.
The int type must be able to represent values in at least the range [-32767...32767]. The exact range is left up to the implementation, depending on the word size and the signed-integer representation.
C is a product of the early 1970s, and at the time there was a lot of variety in byte and word sizes - historically, bytes could be anywhere from 7 to 9 bits wide, words could be 16 to 18 bits wide, etc. Powers of two are convenient, but not magical. Similarly, there are multiple representations for signed integers (2's complement, 1's complement, sign magnitude, etc.). So the language definition specifies the minimum requirements, and it's up to the implementor to map those onto the target platform.
Upvotes: 1
Reputation: 24788
In which header file is char defined?
Nowhere - char is a built-in type, not a user-defined one. It's part of the core language. The same applies to int.
Is there no definition at all of these built-in types?
There is. The standard does define these built-in types.
Note that both char and int are also keywords, which means you can't use them as identifiers: they have a reserved, already-assigned use in the language.
Upvotes: 8
Reputation: 68034
Thus, here is my question, where is int, char defined?
They are defined here: http://www.open-std.org/jtc1/sc22/wg14/www/docs/n1570.pdf
Page 57+ in the PDF viewer (page 39+ as numbered in the document).
Upvotes: 1
Reputation: 225737
The char and int types, among others, are not defined in any header file. They are built-in types, meaning they are part of the core language. Their definitions are hardcoded into the compiler itself.
As to how the compiler defines what those types are, that is dictated by the C standard.
The definitions of int and char can be found in section 6.2.5 (Types). For example, the definition of char:
3 An object declared as type char is large enough to store any member of the basic execution character set. If a member of the basic execution character set is stored in a char object, its value is guaranteed to be nonnegative. If any other character is stored in a char object, the resulting value is implementation-defined but shall be within the range of values that can be represented in that type.
Definitions for the other types, as well as the minimum range of values for each type, follow.
Upvotes: 5