Matt Joiner

Reputation: 118500

Fastest integer type for common architectures

The stdint.h header lacks an int_fastest_t and uint_fastest_t to correspond with the {,u}int_fastX_t types. For instances where the width of the integer type does not matter, how does one pick the integer type that allows processing the greatest quantity of bits with the least penalty to performance? For example, if one was searching for the first set bit in a buffer using a naive approach, a loop such as this might be considered:

// return the bit offset of the first 1 bit
size_t find_first_bit_set(void const *const buf)
{
    uint_fastest_t const *p = buf; // use the fastest type for comparison to zero
    for (; *p == 0; ++p); // inc p while no bits are set
    // return offset of first bit set
    return (p - (uint_fastest_t const *)buf) * sizeof(*p) * CHAR_BIT + ffsX(*p) - 1;
}

Naturally, using char would result in more operations than int. But on a 32 bit system, long long might require operations so expensive that they outweigh the iterations saved over int, and so on.

My current assumption is for the mainstream architectures, the use of long is the safest bet: It's 32 bit on 32 bit systems, and 64 bit on 64 bit systems.
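To make the assumption concrete, something like this sketch is what I'd want stdint.h to provide. The name uint_fastest_t is hypothetical, and mapping it to unsigned long is precisely the guess stated above:

/* Hypothetical stand-in for the missing uint_fastest_t. Mapping it to
 * unsigned long encodes the guess above; note that it goes wrong on
 * LLP64 platforms such as 64 bit Windows, where long stays 32 bits. */
typedef unsigned long uint_fastest_t;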

Upvotes: 14

Views: 8450

Answers (10)

Alex Veleshko

Reputation: 1242

Let's read what the C17 ISO standard has to say about the "fast" types of stdint.h:

7.20.1.3 Fastest minimum-width integer types

1 Each of the following types designates an integer type that is usually fastest 266) to operate with among all integer types that have at least the specified width.

266) The designated type is not guaranteed to be fastest for all purposes; if the implementation has no clear grounds for choosing one type over another, it will simply pick some integer type satisfying the signedness and width requirements.

On the most widely used x86-64 architecture, almost all integer instructions run comparably fast, so compilers normally pick intX_t for int_fastX_t. Indeed, if you don't need more than X bits, why allocate more? In other words, the "speed" meant here is the type's internal CPU performance.

Your case, though, mostly reads memory and does very little CPU processing on it. For that workload, an integer type as wide as the memory bus makes more sense. Nowadays that's 64 bits, so int64_t is the type. The C standard has no "memory bus width" integer type, so if you are trying to write an architecture-independent solution, intmax_t is your best bet. Just make sure not to read past the end of the buffer.
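As a sketch of what I mean, here is a bounded variant of the question's function that scans with uintmax_t and takes an explicit length so it cannot overrun. The function name and the byte-wise tail handling are mine, not from the question:

#include <limits.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Scan len bytes word-by-word using uintmax_t, falling back to a byte
 * loop for any tail that is not a whole word. Returns the bit offset of
 * the first set bit (bytes in memory order, LSB first within a byte),
 * or (size_t)-1 if no bit is set. */
size_t find_first_bit_set_bounded(void const *buf, size_t len)
{
    unsigned char const *bytes = buf;
    size_t i = 0;

    /* Whole words first: one comparison covers sizeof(uintmax_t) bytes. */
    for (; len - i >= sizeof(uintmax_t); i += sizeof(uintmax_t)) {
        uintmax_t word;
        memcpy(&word, bytes + i, sizeof word); /* avoids alignment traps */
        if (word != 0)
            break; /* some bit is set within these bytes */
    }

    /* Pinpoint (or finish) byte by byte. */
    for (; i < len; i++) {
        if (bytes[i] != 0) {
            unsigned char b = bytes[i];
            size_t bit = 0;
            while (!(b & 1)) {
                b >>= 1;
                bit++;
            }
            return i * CHAR_BIT + bit;
        }
    }
    return (size_t)-1; /* no bit set in the buffer */
}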

Upvotes: 0

Thomas Padron-McCarthy

Reputation: 27632

I'm not sure I really understand the question, but why aren't you just using int? Quoting from my free draft copy of the (wrong, i.e. C++) standard: "Plain ints have the natural size suggested by the architecture of the execution environment."

But I think that if you want to have the optimal integer type for a certain operation, it will be different depending on which operation it is. Trying to find the first bit in a large data buffer, or finding a number in a sequence of integers, or moving them around, could very well have completely different optimal types.

EDIT:

For whatever it's worth, I did a small benchmark. On my particular system (Intel i7 920 with Linux, gcc -O3) it turns out that long ints (64 bits) are quite a bit faster than plain ints (32 bits) on this particular example. I would have guessed the opposite.
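For reference, a minimal harness along these lines is enough to reproduce that kind of measurement; the buffer size and repetition count below are illustrative placeholders, not the exact figures I used:

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define BUF_BYTES (8u << 20) /* scan the same 8 MiB regardless of type */
#define REPS 50

/* The loop is wrapped in a macro so it is compiled separately for each
 * candidate type; the volatile sink stops dead-code elimination. */
#define TIME_SCAN(type, label)                                      \
    do {                                                            \
        size_t n = BUF_BYTES / sizeof(type);                        \
        type *buf = calloc(n, sizeof(type));                        \
        if (!buf) { perror("calloc"); exit(1); }                    \
        buf[n - 1] = 1; /* so the data is not provably all zero */  \
        type acc = 0;                                               \
        clock_t t0 = clock();                                       \
        for (int r = 0; r < REPS; r++)                              \
            for (size_t i = 0; i < n; i++)                          \
                acc |= buf[i];                                      \
        volatile type sink = acc; (void)sink;                       \
        printf("%-10s %.3f s\n", label,                             \
               (double)(clock() - t0) / CLOCKS_PER_SEC);            \
        free(buf);                                                  \
    } while (0)

int main(void)
{
    TIME_SCAN(unsigned char,      "char");
    TIME_SCAN(unsigned int,       "int");
    TIME_SCAN(unsigned long,      "long");
    TIME_SCAN(unsigned long long, "long long");
    return 0;
}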

Upvotes: 4

Matt Joiner

Reputation: 118500

For all existing mainstream architectures, long is at present the fastest type for loop throughput.

Upvotes: 1

Skizz

Reputation: 71070

It is not possible to answer this question since the question is incomplete. As an analogy, consider the question:

What is the fastest vehicle?

A Bugatti Veyron? Certainly fast, but no good for going from London to New York.

What is missing from the question is the context the integer will be used in. In the original example above, I doubt you'd see much difference between 8, 32, or 64 bit values if the array is large and sparse, since you'll hit memory bandwidth limits before CPU limits.

The main point is, the architecture does not define what size the various integer types are, it's the compiler designer that does that. The designer will carefully weigh up the pros and cons for various sizes for each type for a given architecture and pick the most appropriate.

I guess the 32 bit int was chosen for 64 bit systems because, for most of the operations ints are used for, 32 bits are enough. Since memory bandwidth is a limiting factor, saving on memory use was probably the overriding consideration.

Upvotes: 0

Jens Gustedt

Reputation: 78903

I would guess that the types size_t (for an unsigned type) and ptrdiff_t (for a signed type) will usually correspond to quite efficient integer types on any given platform.

But nothing can prove that other than inspecting the produced assembler and doing benchmarks.

Edit, including the different comments, here and in other replies:

size_t and ptrdiff_t are the only typedefs that are normative in C99 and for which one may make a reasonable assumption that they are related to the architecture.

There are 5 different possible ranks for standard integer types (char, short, int, long, long long). All the forces push towards having types of width 8, 16, 32, 64 and, in the near future, 128. As a consequence, int will be stuck at 32 bits. Its definition will have nothing to do with efficiency on the platform, but will just be constrained by that width requirement.
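Applied to the question's loop, my suggestion would look roughly like the sketch below; whether size_t actually wins is exactly what the assembler inspection and benchmarks have to show:

#include <limits.h>
#include <stddef.h>

/* The question's loop with size_t as the word type. Assumes at least
 * one bit is set somewhere and that buf is suitably aligned. */
size_t find_first_bit_set(void const *const buf)
{
    size_t const *p = buf;
    for (; *p == 0; ++p)
        ; /* skip all-zero words */
    size_t word = *p, bit = 0;
    while (!(word & 1)) {
        word >>= 1;
        ++bit;
    }
    return (size_t)(p - (size_t const *)buf) * sizeof *p * CHAR_BIT + bit;
}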

Upvotes: 1

srparish

Reputation: 1708

If you're compiling with gcc, I'd recommend using __builtin_ffs() for finding the first bit set:

Built-in Function: int __builtin_ffs (unsigned int x)
Returns one plus the index of the least significant 1-bit of x, or if x is zero, returns zero.

This will often compile down to a single native assembly instruction.
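Plugged into the question's function, it might look like this sketch; I've picked unsigned long as the word type along with the matching __builtin_ffsl variant, and it assumes some bit is set:

#include <limits.h>
#include <stddef.h>

/* gcc-specific sketch: scan unsigned long words, then let the builtin
 * locate the bit; on x86 the builtin typically becomes a single bsf. */
size_t find_first_bit_set(void const *const buf)
{
    unsigned long const *p = buf;
    for (; *p == 0; ++p)
        ; /* skip all-zero words */
    return (size_t)(p - (unsigned long const *)buf) * sizeof *p * CHAR_BIT
           + __builtin_ffsl((long)*p) - 1;
}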

Upvotes: 0

R.. GitHub STOP HELPING ICE

Reputation: 215259

int_fast8_t is always the fastest integer type in a correct implementation. There can never be integer types smaller than 8 bits (because CHAR_BIT>=8 is required), and since int_fast8_t is the fastest integer type with at least 8 bits, it's thus the fastest integer type, period.
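What a given implementation actually picks is easy to check. On x86-64 glibc, the snippet below typically prints 1 8 8 8 (int_fast8_t stays a byte while the wider fast types are all long), but that is an observation about one platform, not a rule:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* Print what this implementation chose for the "fast" types. */
    printf("%zu %zu %zu %zu\n",
           sizeof(int_fast8_t), sizeof(int_fast16_t),
           sizeof(int_fast32_t), sizeof(int_fast64_t));
    return 0;
}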

Upvotes: 10

Jason Williams

Reputation: 57902

Theoretically, int is the best bet. It should map to the CPU's native register size, and thus be "optimal" in the sense you're asking about.

However, you may still find that an int-64 or int-128 is faster on some CPUs than an int-32, because although these are larger than the register size, they reduce the number of iterations of your loop, and thus may work out more efficient overall by minimising the loop overheads and/or taking advantage of DMA to load/store the data faster.

(For example, on ARM-2 processors it took 4 memory cycles to load one 32-bit register, but only 5 cycles to load two sequentially, and 7 cycles to load four sequentially. The routine you suggest above would be optimised to use as many registers as you could free up (8 to 10, usually), and could therefore run up to 3 or 4 times faster by using multiple registers per loop iteration.)
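In portable C, the multiple-words-per-iteration idea looks something like this sketch; the unroll factor of four is arbitrary, and it assumes at least one bit is set within the first nwords words:

#include <limits.h>
#include <stddef.h>

/* OR four words per iteration so one branch covers four loads, then
 * rescan single words to pinpoint the hit. */
size_t find_first_bit_set_unrolled(unsigned long const *buf, size_t nwords)
{
    size_t i = 0;
    for (; i + 4 <= nwords; i += 4)
        if (buf[i] | buf[i + 1] | buf[i + 2] | buf[i + 3])
            break;
    for (; buf[i] == 0; ++i)
        ; /* locate the exact word (also covers the tail) */
    unsigned long w = buf[i];
    size_t bit = 0;
    while (!(w & 1)) {
        w >>= 1;
        ++bit;
    }
    return i * sizeof *buf * CHAR_BIT + bit;
}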

The only way to be sure is to write several routines and then profile them on the specific target machine to find out which produces the best performance.

Upvotes: 3

j_random_hacker

Reputation: 51226

The answer is int itself. At least in C++, where 3.9.1/2 of the standard says:

Plain ints have the natural size suggested by the architecture of the execution environment

I expect the same is true for C, though I don't have any of the standards documents.

Upvotes: 1

Jon Cage

Reputation: 37458

If you want to be certain you've got the fastest implementation, why not benchmark each one on the systems you're expecting to run on instead of trying to guess?

Upvotes: 2
