BangOperator

Reputation: 4447

Should I use Int32 for small numbers instead of Int or Int64 on a 64-bit architecture?

The iOS app I am working on supports only 64-bit devices. When creating a new integer value in Swift, and given that the range I want to store will never overflow an Int32, I wonder if there is any benefit to using Int32 instead of Int/Int64.

Upvotes: 3

Views: 1328

Answers (1)

Rob Napier

Reputation: 299633

No, use Int. The Swift Programming Language is quite explicit about this:

Unless you need to work with a specific size of integer, always use Int for integer values in your code. This aids code consistency and interoperability. Even on 32-bit platforms, Int can store any value between -2,147,483,648 and 2,147,483,647, and is large enough for many integer ranges.

By "work with a specific size of integer," the documentation is describing situations such as file formats and networking protocols that are defined in terms of specific bit-widths. Even if you're only counting to 10, you should still store it in an Int.

Integer types do not convert automatically, so if you have an Int32 and a function requires an Int, you have to convert it with Int(x). This gets very cumbersome very quickly. To avoid that, Swift strongly recommends everything be an Int unless you have a specific reason to do otherwise.
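As a rough illustration (processScore is a hypothetical function, not anything from the question), mixing widths forces an explicit conversion at every call site:

    // Hypothetical function that takes the platform-native Int.
    func processScore(_ score: Int) {
        print("score:", score)
    }

    let smallValue: Int32 = 42

    // This does not compile: Swift never converts integer types implicitly.
    // processScore(smallValue)   // error: cannot convert value of type 'Int32'
    //                            // to expected argument type 'Int'

    // Every call site needs an explicit conversion instead.
    processScore(Int(smallValue))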

You should also avoid UInt, even if your value is unsigned. You should only use UInt when you mean "this machine-word-sized bit pattern" and you should only use the sized UInts (UInt32, etc) when you mean "this bit-width bit pattern." If you mean "a number" (even an unsigned number), you should use Int.

Use UInt only when you specifically need an unsigned integer type with the same size as the platform’s native word size. If this isn’t the case, Int is preferred, even when the values to be stored are known to be nonnegative. A consistent use of Int for integer values aids code interoperability, avoids the need to convert between different number types, and matches integer type inference, as described in Type Safety and Type Inference.
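A small sketch of why Int is friendlier even for values that are never negative (the variable names are invented for illustration): signed arithmetic lets you represent and check an "impossible" result, while the same subtraction on UInt traps at runtime.

    // Counts are never negative, but Int keeps the arithmetic forgiving.
    let itemCount: Int = 10
    let itemsToRemove = 15

    let remaining = itemCount - itemsToRemove   // -5, easy to detect and handle
    print(remaining < 0 ? "removed too many" : "remaining: \(remaining)")

    // The equivalent UInt subtraction traps with an overflow error:
    // let remaining = UInt(10) - UInt(15)      // crash: arithmetic overflow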


See Peter's comments below for some links to further discussion on performance. It is very true that using 32-bit integers can be a significant performance improvement when working with large data structures, particularly because of caching and locality issues. But as a rule, this should be hidden within a data type that manages that extra complexity, isolating performance-critical code from the main system. Shifting back and forth between 32- and 64-bit integers can easily overwhelm the advantages of smaller data if you're not careful.
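As a sketch of what "hidden within a data type" might look like (the type and member names here are invented for illustration), the 32-bit storage can live inside one type while the public interface stays in Int, so conversions happen in exactly one place:

    // Illustrative sketch: store Int32 internally to halve the memory footprint
    // of a large buffer, while exposing Int so callers never juggle widths.
    struct CompactCounts {
        private var storage: [Int32] = []

        mutating func append(_ value: Int) {
            storage.append(Int32(value))   // assumes the value fits in Int32
        }

        subscript(index: Int) -> Int {
            Int(storage[index])
        }

        var count: Int { storage.count }
    }

    var counts = CompactCounts()
    counts.append(7)
    print(counts[0])   // 7 — the conversion stays hidden inside the type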

So as a rule, use Int. There are advantages to using Int32 in some cases, but trying to use it as a default is as likely to hurt performance as help it, and will definitely increase code complexity dramatically.

Upvotes: 4
