Igor Ševo

Reputation: 5515

Primitive type performance

I am wondering about the performance of different primitive types, primarily in C#. I realize this is not strictly a language-specific question, since performance ultimately depends on how the hardware handles each type.

I have read the following two questions:

Nevertheless, I need a few clarifications.

I know that on a 32-bit machine an int is faster than both short and byte, since int matches the platform's native word size. However, what happens on 64-bit systems? Is it better, performance-wise, to use a long instead of an int?

Also, what happens with floating-point types? Is double better than float?

The answer may or may not be language-specific. I assume there aren't many differences between languages on this point, but if there are, it would be nice to have an explanation of why.

Upvotes: 4

Views: 433

Answers (1)

Tim B

Reputation: 41178

Actually, in most cases you will get the same performance for anything smaller than your machine's native word size (e.g. int, short, and byte on a 32-bit machine), because internally the code will just use a 32-bit value to process them.
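
For instance, C#'s own numeric promotion rules reflect this: arithmetic on byte and short operands is carried out on int values, so the narrower types do not buy you cheaper arithmetic. A minimal sketch (names are illustrative only):

```csharp
using System;

class PromotionSketch
{
    static void Main()
    {
        short a = 1000, b = 2000;
        // short c = a + b;   // would not compile: the sum of two shorts is an int
        int c = a + b;        // short operands are promoted to int before the addition
        byte x = 10, y = 20;
        int z = x + y;        // the same promotion applies to byte operands
        Console.WriteLine($"{c} {z}");
    }
}
```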

The same applies on 64-bit systems: there is no reason not to use an int (it will run at the same speed as a long on 64-bit, and faster than a long on 32-bit) unless you need the extra range. If you think about it, 64-bit systems had to run 32-bit code fast, because otherwise no one would have made the transition.
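
If you want to check this on your own machine, a rough Stopwatch micro-benchmark along these lines is a starting point. This is only a sketch: results depend heavily on the JIT, the CPU, and whether the process runs as 32-bit or 64-bit, so the numbers should not be over-interpreted.

```csharp
using System;
using System.Diagnostics;

class IntVsLongSketch
{
    const int Iterations = 100_000_000;

    static long SumInt()
    {
        int acc = 0;                        // may wrap around; it only serves as a checksum
        for (int i = 0; i < Iterations; i++)
            acc += i;
        return acc;
    }

    static long SumLong()
    {
        long acc = 0;
        for (long i = 0; i < Iterations; i++)
            acc += i;
        return acc;
    }

    static void Main()
    {
        // Warm up so the JIT compiles both methods before timing.
        SumInt();
        SumLong();

        var sw = Stopwatch.StartNew();
        long a = SumInt();
        sw.Stop();
        Console.WriteLine($"int:  {sw.ElapsedMilliseconds} ms (checksum {a})");

        sw.Restart();
        long b = SumLong();
        sw.Stop();
        Console.WriteLine($"long: {sw.ElapsedMilliseconds} ms (checksum {b})");
    }
}
```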

The smaller sizes only pay off when you pack many copies of a primitive into structures such as arrays. There you do get a small slowdown from unpacking them, but in most cases that is more than compensated for by better cache utilization and memory locality.
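
To see the memory side of that trade-off, Buffer.ByteLength reports the payload size of a primitive array (ignoring the small, constant object header). A quick sketch:

```csharp
using System;

class ArrayFootprintSketch
{
    static void Main()
    {
        const int n = 1_000_000;

        // Same element count, different element widths.
        Console.WriteLine($"byte[]:  {Buffer.ByteLength(new byte[n]):N0} bytes");
        Console.WriteLine($"short[]: {Buffer.ByteLength(new short[n]):N0} bytes");
        Console.WriteLine($"int[]:   {Buffer.ByteLength(new int[n]):N0} bytes");
        Console.WriteLine($"long[]:  {Buffer.ByteLength(new long[n]):N0} bytes");
    }
}
```

A byte[] holds a quarter of the data of an int[] with the same length, so four times as many elements fit in each cache line.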

Upvotes: 4
