user2341104

Why is int typically 32 bit on 64 bit compilers?

Why is int typically 32 bit on 64 bit compilers? When I was starting to program, I was taught that int is typically the same width as the underlying architecture. And I agree that this also makes sense; I find it logical for an unspecified-width integer to be as wide as the underlying platform (unless we are talking about 8 or 16 bit machines, where such a small range for int would barely be usable).

Later on I learned that int is typically 32 bit on most 64 bit platforms, so I wonder what the reason for this is. For storing data I would prefer an explicitly specified width anyway, so that leaves generic usage for int, and there it doesn't seem to offer any performance advantage; at least on my system I get the same performance for 32 and 64 bit integers. So that leaves the binary memory footprint, which would be slightly reduced, although not by a lot...

Upvotes: 48

Views: 31216

Answers (8)

David Archibald

Reputation: 1528

I originally wrote this up in response to this question. While I've modified it some, it's largely the same.

To get started, it is possible to have plain ints wider than 32 bits, as the C++ draft says:

 Note: Plain ints are intended to have the natural size suggested by the architecture of the execution environment; the other signed integer types are provided to meet special needs. — end note

Emphasis mine

This would ostensibly seem to say that on my 64 bit architecture (and everyone else's) a plain int should have a 64 bit size; that's the size suggested by the architecture, right? However, I would argue that the natural size even on a 64 bit architecture is still 32 bits. The quote in the specification is mainly there for cases where a 16 bit plain int is desired, which is the minimum size the specification allows.

The largest factor is convention: going from a 32 bit architecture with a 32 bit plain int and adapting that source for a 64 bit architecture is simply easier if you keep it at 32 bits, both for the implementers and for their users, in two different ways:

The first is that the fewer differences there are across systems, the easier things are for everyone. Discrepancies between systems are nothing but headaches for most programmers: they only serve to make it harder to run code across systems. They would even add to the relatively rare cases where you're not able to share code across computers with the same distribution, just 32 bit versus 64 bit. However, as John Kugelman pointed out, architectures have gone from a 16 bit to a 32 bit plain int before; going through that hassle could be done again today, which ties into the next point:

The more significant component is the gap it would cause in integer sizes, or the need for a new type. Because sizeof(short) <= sizeof(int) <= sizeof(long) <= sizeof(long long) is in the actual specification, a gap is forced if plain int moves to 64 bits. It starts with long: if plain int is adjusted to 64 bits, the constraint sizeof(int) <= sizeof(long) forces long to be at least 64 bits too, and from there there's an intrinsic gap in the sizes. Since long or plain int is usually used as the 32 bit integer, and neither of them could be any more, only one other data type could fill the gap: short. Because short only has a minimum of 16 bits, it could theoretically be widened to 32 bits and fill that gap; however, short is intended to be optimized for space, so it should be kept as it is, and there are use cases for small, 16 bit integers as well. No matter how you arrange the sizes, some width, and therefore its use case, becomes entirely unavailable. A bigger width doesn't necessarily mean better.
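
That ordering constraint is easy to see in code; here is a minimal sketch (the commented sizes assume a typical LP64 platform and are not guaranteed by the standard):

    // The standard guarantees only this ordering plus minimum widths,
    // never exact sizes. Typical LP64 sizes are noted in the comments.
    static_assert(sizeof(short) <= sizeof(int),       "short <= int");        // 2 <= 4
    static_assert(sizeof(int)   <= sizeof(long),      "int <= long");         // 4 <= 8
    static_assert(sizeof(long)  <= sizeof(long long), "long <= long long");   // 8 <= 8

    int main() {}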

All of this would imply a change to the specification, and even if an implementer went rogue, the resulting implementation would likely end up broken or obsolete because of the change. Designers of long lasting systems have to work with an entire base of entwined code: their own code in the system, its dependencies, and the users' code they will want to run. Doing that amount of work without considering the repercussions is simply unwise.

As a side note, if your application is incompatible with a >32 bit integer, you can use static_assert(sizeof(int) * CHAR_BIT <= 32, "Int wider than 32 bits!");. However, who knows, maybe the specifications will change and 64 bit plain ints will be implemented, so if you want to be future proof, don't add the static assert.
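
Spelled out as a complete translation unit, that check might look like this (a minimal sketch; CHAR_BIT comes from <climits>):

    #include <climits>

    // Refuses to compile on any implementation where plain int is wider than 32 bits.
    static_assert(sizeof(int) * CHAR_BIT <= 32, "Int wider than 32 bits!");

    int main() {}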

Upvotes: 2

Phil P

Reputation: 843

The history, trade-offs and decisions are explained by The Open Group at http://www.unix.org/whitepapers/64bit.html. It covers the various data models, their strengths and weaknesses and the changes made to the Unix specifications to accommodate 64-bit computing.
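
As a rough companion to that paper, the sketch below simply prints the sizes those data models differ on; the commented values are what one would typically see under ILP32, LP64 (most 64-bit Unix systems) and LLP64 (64-bit Windows), not guarantees:

    #include <cstdio>

    int main() {
        // Typical sizes in bytes:                         ILP32  LP64  LLP64
        std::printf("int:       %zu\n", sizeof(int));        // 4     4     4
        std::printf("long:      %zu\n", sizeof(long));       // 4     8     4
        std::printf("long long: %zu\n", sizeof(long long));  // 8     8     8
        std::printf("void*:     %zu\n", sizeof(void*));      // 4     8     8
    }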

Upvotes: 15

eispeed

Reputation: 21

The C++ standard does not say how much memory must be used for the int type; it only tells you how much memory must be used at least. In many programming environments with 32-bit pointer variables, int and long are both 32 bits long.
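
A minimal sketch of what is actually guaranteed versus what those environments happen to provide (only the minimum widths below are required by the standard):

    #include <limits>

    // Required minimums: int must have at least 15 value bits (a 16-bit range),
    // long at least 31. Most 32-bit and 64-bit environments give both 32 bits.
    static_assert(std::numeric_limits<int>::digits  >= 15, "int: at least 16 bits");
    static_assert(std::numeric_limits<long>::digits >= 31, "long: at least 32 bits");

    int main() {}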

Upvotes: 0

andre

Reputation: 7249

Since no one has pointed this out yet:

int is only guaranteed to be able to represent at least the range -32767 to 32767 (a 16 bit range); that's required by the standard. If you want to support 64 bit numbers on all platforms, I suggest using the right type, long long, which is guaranteed to support at least -9223372036854775807 to 9223372036854775807.

int is allowed to be anything so long as it provides the minimum range required by the standard.
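
A small sketch of that suggestion; long long carries the 64 bit guarantee everywhere, while the exact-width typedef from <cstdint> is an alternative where it is provided:

    #include <cstdint>

    int main() {
        long long    portable = 9000000000LL;  // at least 64 bits on every conforming platform
        std::int64_t exact    = 9000000000;    // exactly 64 bits, where the typedef exists
        return portable == exact ? 0 : 1;
    }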

Upvotes: -1

James Kanze

Reputation: 153909

Bad choices on the part of the implementors?

Seriously, according to the standard, "Plain ints have the natural size suggested by the architecture of the execution environment", which does mean a 64 bit int on a 64 bit machine. One could easily argue that anything else is non-conformant. But in practice, the issues are more complex: switching from 32 bit int to 64 bit int would not allow most programs to handle large data sets or whatever (unlike the switch from 16 bits to 32); most programs are probably constrained by other considerations. And it would increase the size of the data sets, and thus reduce locality and slow the program down.

Finally (and probably most importantly), if int were 64 bits, short would have to be either 16 bits or 32 bits, and you'd have no way of specifying the other (except with the typedefs in <stdint.h>, and the intent is that these should only be used in very exceptional circumstances). I suspect that this was the major motivation.
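
Those typedefs are the escape hatch for the width that would otherwise go missing; a minimal sketch (the variable names are just illustrative):

    #include <cstdint>

    // If plain int were 64 bits and short stayed 16 bits, no built-in type would
    // be 32 bits wide; the fixed-width typedefs can still name that width.
    std::int16_t sample16  = 0;  // exactly 16 bits
    std::int32_t counter32 = 0;  // exactly 32 bits

    int main() { return sample16 + counter32; }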

Upvotes: 20

fatihk

Reputation: 7919

The main reason is backward compatibility. Moreover, there is already a 64 bit integer type, long, and the same goes for the floating point types: float and double. Changing the sizes of these basic types for different architectures would only introduce complexity. Besides, a 32 bit integer covers many needs in terms of range.

Upvotes: 1

Mats Petersson

Reputation: 129364

Because for a lot of software there is no advantage in having 64-bit integers.

Many calculations can be done in a 32-bit integer (for many purposes, values up to 4 billion, or +/- 2 billion, are sufficient), and using 64-bit ints for them will not help anything.

Using a bigger integer will, however, have a negative effect on how many integer-sized "things" fit in the processor's cache. So making them bigger will make calculations that involve large numbers of integers (e.g. arrays) take longer, because fewer of them fit in the cache.
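
A back-of-the-envelope sketch of that footprint argument (the element count is an arbitrary assumption):

    #include <cstddef>
    #include <cstdint>
    #include <cstdio>

    int main() {
        const std::size_t n = 1000000;
        // The same logical data takes twice the memory, and thus twice the cache,
        // when stored in 64-bit elements instead of 32-bit ones.
        std::printf("32-bit array: %zu bytes\n", n * sizeof(std::int32_t));  // ~4 MB
        std::printf("64-bit array: %zu bytes\n", n * sizeof(std::int64_t));  // ~8 MB
    }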

That int is the natural size of the machine word isn't something stipulated by the C++ standard. In the days when most machines were 16 or 32 bit, it made sense to make int either 16 or 32 bits, because that is a very efficient size for those machines. When it comes to 64 bit machines, the larger size no longer "helps", so staying with a 32 bit int makes more sense.

Edit: Interestingly, when Microsoft moved to 64-bit, they didn't even make long 64-bit, because it would break too many things that relied on long being a 32-bit value (or more importantly, they had a bunch of things that relied on long being a 32-bit value in their API, where sometimes client software uses int and sometimes long, and they didn't want that to break).
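
A hedged illustration of that Windows choice (_WIN64 is the MSVC predefined macro for 64-bit targets); on 64-bit Windows (LLP64) the assertion holds, whereas a typical 64-bit Linux (LP64) makes long 8 bytes:

    // LLP64 (64-bit Windows): sizeof(long) == 4, sizeof(long long) == 8, sizeof(void*) == 8
    // LP64  (64-bit Linux):   sizeof(long) == 8, sizeof(long long) == 8, sizeof(void*) == 8
    #ifdef _WIN64
    static_assert(sizeof(long) == 4, "64-bit Windows keeps long at 32 bits");
    #endif

    int main() {}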

Upvotes: 5

doron

Reputation: 28882

ints have been 32 bits on most major architectures for so long that changing them to 64 bits will probably cause more problems than it solves.

Upvotes: 4
