Darky

Reputation: 91

Using 64-bit integers with 64-bit compilers and OSes

I'm unsure about when to use 64-bit integers when targeting 64-bit OSes.

Has anyone done conclusive studies focused on the speed of the generated code?

Regards!

Upvotes: 7

Views: 208

Answers (3)

Jan Hudec

Reputation: 76386

Use int and trust the platform and compiler authors to have done their job and chosen the most efficient representation for it. On most 64-bit platforms int is 32 bits wide, which means it is no less efficient than the 64-bit types.

Upvotes: 0

Mark B

Reputation: 96311

You have managed to cram a ton of questions into one question here. It looks to me like all your questions basically concern micro-optimizations. As such I'm going to make a two-part answer:

  • Don't worry about size from a performance perspective but instead use types that are indicative of the data that they will contain and trust the compiler's optimizer to sort it out.

  • If performance becomes a concern at some point during development, profile your code. Then you can make algorithmic adjustments as appropriate, and if the profiler shows that integer operations are causing a problem you can benchmark different sizes side by side.
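"Types that are indicative of the data" might look like the following sketch (function names are hypothetical, for illustration only): size_t for in-memory sizes and indices, and a fixed-width type only where an external format pins down the exact width.

```c
#include <stddef.h>
#include <stdint.h>

/* size_t: the natural type for indexing an in-memory buffer. */
size_t count_zero_bytes(const unsigned char *buf, size_t len) {
    size_t n = 0;
    for (size_t i = 0; i < len; i++)
        if (buf[i] == 0)
            n++;
    return n;
}

/* uint32_t: the (hypothetical) file format fixes this field at
   exactly 32 bits little-endian, so the width is part of the data's
   meaning, not a performance guess. */
uint32_t read_le32(const unsigned char *p) {
    return (uint32_t)p[0] | ((uint32_t)p[1] << 8)
         | ((uint32_t)p[2] << 16) | ((uint32_t)p[3] << 24);
}
```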

Upvotes: 3

Kevin A. Naudé

Reputation: 4080

I agree with @MarkB but want to provide more detail on some topics.

On x64, there are more registers available (twice as many). The standard calling conventions have therefore been designed to pass more parameters in registers by default. So as long as the number of parameters is not excessive (typically 4 or fewer), their types will make no difference: they will be promoted to 64 bits and passed in registers anyway.

Space will still be allocated on the stack for those 64-bit parameters even though they are passed in registers. This is by design, to keep their storage locations simple and contiguous with those of any surplus parameters. The surplus parameters are placed on the stack regardless, so size may matter in those cases.
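A small illustration of the promotion point (the register assignments in the comment describe the common conventions, not anything this portable C can assert): narrow arguments simply occupy the low bits of a full register at the call site.

```c
#include <stdint.h>

/* Illustration only: under the System V AMD64 convention the first six
   integer arguments travel in RDI/RSI/RDX/RCX/R8/R9, and under the
   Microsoft x64 convention the first four travel in RCX/RDX/R8/R9,
   whatever their declared width. Declaring a, b, c narrower than
   64 bits does not change how this call is made. */
int64_t mix(int8_t a, int16_t b, int32_t c, int64_t d) {
    return (int64_t)a + b + c + d;
}
```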

This issue is particularly important for in-memory data structures. Using 64 bits where 32 would suffice wastes memory and, more importantly, occupies space in cache lines. The cache impact is not simple, though. If your data access pattern is sequential, that is when you pay for it, by essentially making half of your cache unusable. (Assuming you only needed half of each 64-bit quantity.)

If your access pattern is random, there is no impact on cache performance. This is because every access occupies a full cache line anyway.

There can be a small cost to accessing integers that are smaller than the word size. However, pipelining and multiple-issue execution mean that the extra zero- or sign-extension instruction is almost always completely hidden and goes unobserved.

The upshot of all this is simple: choose the integer size that fits your problem. For parameters, the compiler can promote them as needed. For in-memory data structures, smaller is typically better.

Upvotes: 4
