Mattia Dinosaur

Reputation: 920

Who defines the integer type range in the C language?

I learned that on the x86_64 platform, the int range is from -2,147,483,648 to 2,147,483,647.

In C99, the minimum required value of INT_MAX is +32767:

Their implementation-defined values shall be equal or greater in magnitude (absolute value) to those shown, with the same sign. [...]
maximum value for an object of type int: INT_MAX +32767

The glibc documentation references the POSIX specification:

{INT_MAX} Maximum value for an object of type int.
[CX] Minimum Acceptable Value: 2 147 483 647
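As a quick check, here is a minimal sketch (plain standard C, nothing beyond <limits.h> and <stdio.h>) that prints the values a given implementation actually provides; the output of course depends on the compiler and target:

    #include <limits.h>
    #include <stdio.h>

    int main(void)
    {
        /* print the limits this particular implementation provides */
        printf("INT_MIN     = %d\n", INT_MIN);
        printf("INT_MAX     = %d\n", INT_MAX);
        printf("sizeof(int) = %zu bytes\n", sizeof(int));
        return 0;
    }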


My question

There are multiple documents defining the range of int. They do not contradict each other, but who really defines the final INT_MAX value on a specific platform like x86_64?

I guess it is one of the following, but I do not know which one:

Upvotes: 2

Views: 184

Answers (4)

Daft Soft

Reputation: 72

This is the so-called "data model", which is defined by the ABI; the compiler implements it, and of course all libraries (including glibc, which you mentioned) comply with it.
The notation is "C" for char, "S" for short, "I" for int, "L" for long and "P" for pointer.
So the full 32-bit ABI data model is C8S16I32L32P32. To get a shorter, nicer-looking string, the prefixes with the traditional sizes are dropped (traditional or common rather than standard, because the C standard doesn't define exact sizes for these types) and the size is moved to the end. That gives ILP32, which means that int, long and pointer are 32 bits wide (while char stays 8 bits and short stays 16 bits).
For the x86_64 ABI it is LP64 (I suppose you can already form the full notation of this data model and see what size int is on this ABI).
And yes, the Windows world is different: its 64-bit data model is LLP64. That means the only 64-bit types are long long and pointers (long is still 32 bits wide).
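If you want to see which data model your own toolchain follows, a small sketch along these lines (standard C only; the comment mapping size patterns to ILP32/LP64/LLP64 just restates the convention above) will show it:

    #include <stdio.h>

    int main(void)
    {
        /* size pattern (bytes) -> data model:
           char short int long long-long pointer
             1    2    4    4      8       4     = ILP32
             1    2    4    8      8       8     = LP64
             1    2    4    4      8       8     = LLP64 */
        printf("char:      %zu\n", sizeof(char));
        printf("short:     %zu\n", sizeof(short));
        printf("int:       %zu\n", sizeof(int));
        printf("long:      %zu\n", sizeof(long));
        printf("long long: %zu\n", sizeof(long long));
        printf("pointer:   %zu\n", sizeof(void *));
        return 0;
    }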

Upvotes: 1

ikegami

Reputation: 386361

As far as C is concerned, it's up to the compiler.


Of the constants in limits.h, the C17 standard says

5.2.4.2.1.1 [...] Their implementation-defined values shall be equal or greater in magnitude (absolute value) to those shown, with the same sign. [...]

The term "implementation" is used throughout to refer to the C compiler and associated libraries.

5. An implementation translates C source files and executes C programs in two data-processing-system environments, which will be called the translation environment and the execution environment in this International Standard. Their characteristics define and constrain the results of executing conforming C programs constructed according to the syntactic and semantic rules for conforming implementations.

It is not directly based on the hardware, as your question presumes. For example, Microsoft's C compiler for Windows on x86-64 uses a 32-bit long, but gcc on Linux uses a 64-bit long on the same hardware.
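To make the "equal or greater in magnitude" rule concrete, here is a small sketch (requires C11 for _Static_assert) that checks the standard's minimum magnitudes at compile time and prints LONG_MAX, which is exactly where MSVC and a typical gcc/Linux build disagree on x86-64:

    #include <limits.h>
    #include <stdio.h>

    int main(void)
    {
        /* the standard only guarantees these minimum magnitudes;
           an implementation may (and usually does) exceed them */
        _Static_assert(INT_MAX >= 32767, "INT_MAX below the C minimum");
        _Static_assert(LONG_MAX >= 2147483647L, "LONG_MAX below the C minimum");

        /* LONG_MAX is 2147483647 under MSVC but 9223372036854775807
           under a typical gcc/Linux build on the same x86-64 hardware */
        printf("LONG_MAX = %ld\n", LONG_MAX);
        return 0;
    }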

Upvotes: 3

KamilCuk

Reputation: 141493

who really defines the final INT_MAX value on a specific platform like x86_64?

The compiler defines the final int max value.

The value is not necessarily specific to the platform: you can compile code for the x86_64 architecture using a compiler whose int is 16 bits wide, or 64 bits, or any other width. The C language is an abstraction; the compiler translates that abstraction into machine code.
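A compile-time sketch of that idea: the limits.h macros are required to be usable in #if, so you can ask the preprocessor which int this particular compiler chose (the width labels printed below are approximations, since INT_MAX only constrains the number of value bits):

    #include <limits.h>
    #include <stdio.h>

    int main(void)
    {
        /* the compiler, not the CPU, decides which branch is compiled in */
    #if INT_MAX == 32767
        puts("this compiler uses a 16-bit int");
    #elif INT_MAX == 2147483647
        puts("this compiler uses a 32-bit int");
    #else
        puts("this compiler uses some other int width");
    #endif
        return 0;
    }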

Upvotes: 1

nneonneo

Reputation: 179552

The size of integer types is determined, on many platforms, by the target ABI (Application Binary Interface). The ABI is usually defined by the operating system and/or the compiler used. Programs compiled against the same ABI can often interoperate, even if they were built by different compilers.

For example, on many Intel-based Linux and UNIX machines, the System V ABI determines the sizes of the fundamental C types per processor; there are different definitions for x86 and x86-64, for example, but they will be consistent across all systems that use that ABI. Compilers targeting Intel Linux machines will typically use the System V ABI, so you get the same int no matter which compiler you use. However, Microsoft operating systems will use a different ABI (actually, several, depending on how you look), which defines the fundamental types differently.

Modern desktop systems almost always provide 4-byte ints. However, embedded systems will often provide smaller ints; the Arduino AVR platform, for example, defines int as a 16-bit type. This is again dependent on the compiler and processor (no OS in this case).

So, the short answer is that "it depends". Your compiler (in some specific configuration) will ultimately be responsible for translating int into a machine type, so in some sense your compiler is the ultimate source of truth. But the compiler's decision might be informed by an existing ABI standard, by the standards of the OS, the processor, or just existing convention.
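If "it depends" is a problem for your code, the usual escape hatch is the exact-width types from <stdint.h>, which keep the same width regardless of what the ABI picks for plain int; a minimal sketch:

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        /* int32_t is exactly 32 bits on every platform that provides it,
           no matter what the ABI chooses for plain int */
        int32_t counter = INT32_MAX;
        printf("INT32_MAX = %" PRId32 "\n", counter);
        return 0;
    }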

Upvotes: 2
