Reputation: 846
As I understand it, the C specification says that type int is supposed to be the most efficient type on the target platform that contains at least 16 bits.
Isn't that exactly what the C99 definition of int_fast16_t is too? Maybe they put it in there just for consistency, since the other int_fastXX_t types are needed?
Update
To summarize discussion below:
Example: MSVC on x86-64 has a 32-bit int, even on 64-bit systems. MS chose to do this because too many people assumed int would always be exactly 32 bits, and so a lot of ABIs would break. However, it's possible that int_fast32_t would be a 64-bit type if 64-bit values were faster on x86-64. (Which I don't think is actually the case, but it demonstrates the point.)
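One quick way to see what a given compiler actually chose is to just print the sizes; this is a minimal probe and the output is entirely platform- and compiler-specific:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* On MSVC x86-64 one would expect sizeof(int) == 4; whether the
       fast types come out wider is up to the implementation. */
    printf("sizeof(int)          = %zu\n", sizeof(int));
    printf("sizeof(int_fast16_t) = %zu\n", sizeof(int_fast16_t));
    printf("sizeof(int_fast32_t) = %zu\n", sizeof(int_fast32_t));
    return 0;
}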
Upvotes: 45
Views: 15617
Reputation: 15134
An example of how the two types might be different: suppose there's an architecture where 8-bit, 16-bit, 32-bit and 64-bit arithmetic are equally fast. (The i386 comes close.) Then, the implementer might use an LLP64 model, or better yet allow the programmer to choose between ILP64, LP64 and LLP64, since there's a lot of code out there that assumes long is exactly 32 bits, and that sizeof(int) <= sizeof(void*) <= sizeof(long). Any 64-bit implementation must violate at least one of these assumptions.
In that case, int would probably be 32 bits wide, because that will break the least code from other systems, but uint_fast16_t could still be 16 bits wide, saving space.
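To make the trade-off concrete, here is a minimal sketch (assuming a C11 compiler for _Static_assert) encoding those legacy assumptions; any 64-bit implementation will fail to compile at least one of them:

#include <limits.h>

/* Legacy assumptions from the text above (ignoring padding bits);
   a 64-bit implementation cannot satisfy all three at once. */
_Static_assert(sizeof(long) * CHAR_BIT == 32,
               "long is exactly 32 bits");
_Static_assert(sizeof(int) <= sizeof(void *),
               "sizeof(int) <= sizeof(void*)");
_Static_assert(sizeof(void *) <= sizeof(long),
               "sizeof(void*) <= sizeof(long)");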
Upvotes: 1
Reputation: 153407
int is a "most efficient type" in speed/size, but that is not specified by the C spec. It must be 16 or more bits.
int_fast16_t is the most efficient type in speed with at least the range of a 16-bit int.
Example: A given platform may have decided that int should be 32-bit for many reasons, not only speed. The same system may find a different type is fastest for 16-bit integers.
Example: On a 64-bit machine, where one would expect int to be 64-bit, a compiler may use a mode with 32-bit int compilation for compatibility. In this mode, int_fast16_t could be 64-bit, as that is natively the fastest width and avoids alignment issues, etc.
Upvotes: 37
Reputation: 180500
int_fast16_t is guaranteed to be the fastest integer type with a size of at least 16 bits. int has no guarantee of its size except that:
sizeof(char) == 1 and sizeof(char) <= sizeof(short) <= sizeof(int) <= sizeof(long),
and that it can hold the range of -32767 to +32767.
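Those guarantees, as stated here, can be written down as compile-time checks; a minimal sketch (assuming C11 _Static_assert) that a conforming implementation should accept:

#include <limits.h>

/* The sizeof ordering stated above... */
_Static_assert(sizeof(char) == 1, "sizeof(char) is 1");
_Static_assert(sizeof(char) <= sizeof(short), "char <= short");
_Static_assert(sizeof(short) <= sizeof(int), "short <= int");
_Static_assert(sizeof(int) <= sizeof(long), "int <= long");

/* ...and the minimum range int must be able to hold */
_Static_assert(INT_MIN <= -32767, "int can hold -32767");
_Static_assert(INT_MAX >= +32767, "int can hold +32767");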
(7.20.1.3p2) "The typedef name int_fastN_t designates the fastest signed integer type with a width of at least N. The typedef name uint_fastN_t designates the fastest unsigned integer type with a width of at least N."
Upvotes: 30
Reputation: 37914
From the C99 rationale, 7.8 Format conversion of integer types <inttypes.h> (a document that accompanies the Standard), emphasis mine:
C89 specifies that the language should support four signed and unsigned integer data types, char, short, int and long, but places very little requirement on their size other than that int and short be at least 16 bits and long be at least as long as int and not smaller than 32 bits. For 16-bit systems, most implementations assign 8, 16, 16 and 32 bits to char, short, int, and long, respectively. For 32-bit systems, the common practice is to assign 8, 16, 32 and 32 bits to these types. This difference in int size can create some problems for users who migrate from one system to another which assigns different sizes to integer types, because Standard C's integer promotion rule can produce silent changes unexpectedly. The need for defining an extended integer type increased with the introduction of 64-bit systems.
The purpose of <inttypes.h> is to provide a set of integer types whose definitions are consistent across machines and independent of operating systems and other implementation idiosyncrasies. It defines, via typedef, integer types of various sizes. Implementations are free to typedef them as Standard C integer types or extensions that they support. Consistent use of this header will greatly increase the portability of a user's program across platforms.
The main difference between int and int_fast16_t is that the latter is likely to be free of these "implementation idiosyncrasies". You may think of it as something like:
I don't care about the current OS/implementation "politics" of int size. Just give me whatever the fastest signed integer type with at least 16 bits is.
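As a concrete sketch of that: <inttypes.h> also supplies format macros, so code can print an int_fast16_t without knowing which underlying type the implementation picked:

#include <inttypes.h>
#include <stdio.h>

int main(void)
{
    /* Whatever the implementation decided the fastest >=16-bit
       signed type is... */
    int_fast16_t n = 12345;

    /* ...PRIdFAST16 expands to the matching printf conversion,
       so this stays portable across platforms */
    printf("n = %" PRIdFAST16 "\n", n);
    return 0;
}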
Upvotes: 2
Reputation: 81149
On some platforms, using 16-bit values may be much slower than using 32-bit values [e.g. an 8-bit or 16-bit store would require performing a 32-bit load, modifying the loaded value, and writing back the result]. Even if one could fit twice as many 16-bit values in a cache as 32-bit values (the normal situation where 16-bit values would be faster than 32-bit values on 32-bit systems), the need to have every write preceded by a read would negate any speed advantage that could produce unless a data structure was read far more often than it was written. On such platforms, a type like int_fast16_t would likely be 32 bits.
That having been said, the Standard unfortunately does not allow what would be the most helpful semantics for a compiler, which would be to allow variables of type int_fast16_t whose address is not taken to arbitrarily behave as 16-bit types or larger types, depending upon what is convenient. Consider, for example, the function:
#include <stdint.h>

int32_t blah(int32_t x)
{
    int_fast16_t y = x;  /* may or may not truncate, depending on the
                            width of int_fast16_t */
    return y;
}
On many platforms, 16-bit integers stored in memory can often be manipulated just like those stored in registers, but there are no instructions to perform 16-bit operations on registers. If an int_fast16_t variable stored in memory is only capable of holding -32768 to +32767, that same restriction would apply to int_fast16_t variables stored in registers. Since coercing oversized values into signed integer types too small to hold them is implementation-defined behavior, that would compel the above code to add instructions to sign-extend the lower 16 bits of x before returning it; if the Standard allowed for such a type, a flexible "at least 16 bits, but more if convenient" type could eliminate the need for such instructions.
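For illustration, a self-contained sketch that exercises the function above; the printed value depends on the width the implementation chose for int_fast16_t:

#include <inttypes.h>
#include <stdio.h>

static int32_t blah(int32_t x)
{
    int_fast16_t y = x; /* implementation-defined result if
                           int_fast16_t is 16 bits and x does not fit */
    return y;           /* converted (sign-extended) back to 32 bits */
}

int main(void)
{
    /* 0x12345 needs 17 bits: with a 16-bit int_fast16_t this commonly
       prints 9029 (0x2345 sign-extended); with a wider one, 74565 */
    printf("blah(0x12345) = %" PRId32 "\n", blah(0x12345));
    return 0;
}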
Upvotes: 1
Reputation: 263237
As I understand it, the C specification says that type int is supposed to be the most efficient type on the target platform that contains at least 16 bits.
Here's what the standard actually says about int (N1570 draft, section 6.2.5, paragraph 5):
A "plain" int object has the natural size suggested by the architecture of the execution environment (large enough to contain any value in the range INT_MIN to INT_MAX as defined in the header <limits.h>).
The reference to INT_MIN and INT_MAX is perhaps slightly misleading; those values are chosen based on the characteristics of type int, not the other way around.
And the phrase "the natural size" is also slightly misleading. Depending on the target architecture, there may not be just one "natural" size for an integer type.
Elsewhere, the standard says that INT_MIN must be at most -32767, and INT_MAX must be at least +32767, which implies that int is at least 16 bits.
Here's what the standard says about int_fast16_t (7.20.1.3):
Each of the following types designates an integer type that is usually fastest to operate with among all integer types that have at least the specified width.
with a footnote:
The designated type is not guaranteed to be fastest for all purposes; if the implementation has no clear grounds for choosing one type over another, it will simply pick some integer type satisfying the signedness and width requirements.
The requirements for int and int_fast16_t are similar but not identical -- and they're similarly vague.
In practice, the size of int is often chosen based on criteria other than "the natural size" -- or that phrase is interpreted for convenience. Often the size of int for a new architecture is chosen to match the size for an existing architecture, to minimize the difficulty of porting code. And there's a fairly strong motivation to make int no wider than 32 bits, so that the types char, short, and int can cover sizes of 8, 16, and 32 bits. On 64-bit systems, particularly x86-64, the "natural" size is probably 64 bits, but most C compilers make int 32 bits rather than 64 (and some compilers even make long just 32 bits).
The choice of the underlying type for int_fast16_t is, I suspect, less dependent on such considerations, since any code that uses it is explicitly asking for a fast 16-bit signed integer type. A lot of existing code makes assumptions about the characteristics of int that go beyond what the standard guarantees, and compiler developers have to cater to such code if they want their compilers to be used.
Upvotes: 8
Reputation: 121387
The difference is that the fast types are allowed to be wider than their counterparts (without fast) for efficiency/optimization purposes. But the C standard by no means guarantees they are actually faster.
C11, 7.20.1.3 Fastest minimum-width integer types
1 Each of the following types designates an integer type that is usually fastest 262) to operate with among all integer types that have at least the specified width.
2 The typedef name int_fastN_t designates the fastest signed integer type with a width of at least N. The typedef name uint_fastN_t designates the fastest unsigned integer type with a width of at least N.
262) The designated type is not guaranteed to be fastest for all purposes; if the implementation has no clear grounds for choosing one type over another, it will simply pick some integer type satisfying the signedness and width requirements.
Another difference is that the fast and least types are required, whereas the exact-width types are optional:
3 The following types are required: int_fast8_t int_fast16_t int_fast32_t int_fast64_t uint_fast8_t uint_fast16_t uint_fast32_t uint_fast64_t
All other types of this form are optional.
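A minimal sketch of the practical consequence: the fast types can be used unconditionally, while an exact-width type needs a feature test (its limit macros, such as INT16_MAX, are defined only when the type exists):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* Required on every conforming C99/C11 implementation */
    printf("int_fast16_t is %zu bytes\n", sizeof(int_fast16_t));

    /* Optional: int16_t exists only on implementations with a
       two's complement 16-bit type with no padding bits */
#ifdef INT16_MAX
    printf("int16_t is %zu bytes\n", sizeof(int16_t));
#else
    puts("int16_t is not provided by this implementation");
#endif
    return 0;
}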
Upvotes: 2