Reputation: 24038
The C++ standard says only that int has to be at least 16 bits wide. And, at least according to cppreference, it's almost always either 16 or 32 bits wide:
data model      int width in bits
----------------------------------
C++ standard    at least 16
LP32            16
ILP32           32
LLP64           32
LP64            32
...
Other models are very rare. For example, ILP64 (8/8/8: int, long, and pointer are 64-bit) only appeared in some early 64-bit Unix systems (e.g. Unicos on Cray).
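For what it's worth, a quick way to see where a given implementation lands in that table is to print the width of int directly; a minimal sketch using only standard headers:

#include <climits>   // CHAR_BIT
#include <cstdio>

int main() {
    // Report the storage width of int on this implementation.
    std::printf("int is %zu bits wide\n", sizeof(int) * CHAR_BIT);
    // Prints 32 under ILP32/LLP64/LP64; an ILP64 system would print 64.
    return 0;
}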
Is there an example of a currently used system with a C++ compiler where int is over 32 bits wide? By "currently used" I mean, e.g., some old system that is perhaps still actively used by a specific industry because there's a valid reason to use it for that specific task and it cannot reasonably be replaced with something else. Preferably this would be something that's actively being developed/worked on, and not just a system running legacy code that hasn't been touched in 20 years. A modern system with, for example, a 64-bit int that is used for scientific computing would also be an excellent answer.
I am not looking for a system that was used for two years in the 90s and then dumped completely. I'm also not looking for something which is only used as a hobby to play around with, or some old system which two companies in the world use just because they are too cheap to upgrade.
Upvotes: 28
Views: 1672
Reputation: 1528
Please note that this answer is intended as a frame challenge: even 64-bit operating systems wouldn't normally want an int wider than 32 bits, for several reasons laid out below. This means it is unlikely a team would go through the effort of creating an operating system without already having taken these points into consideration, and even less likely that such a system would be non-obsolete by this point in time. I hope a more direct answer is found, but I think this at least justifies the major operating systems' decisions.
To get started, you are correct that the C++ draft permits plain ints to be wider than 32 bits. To quote:
Note: Plain ints are intended to have the natural size suggested by the architecture of the execution environment; the other signed integer types are provided to meet special needs. — end note
Emphasis mine
This would ostensibly seem to say that on my 64-bit architecture (and everyone else's) a plain int should have a 64-bit size; that's the size suggested by the architecture, right? However, I must assert that the natural size even on a 64-bit architecture is 32 bits. The quote in the spec is mainly there for cases where a 16-bit plain int is desired.
Convention is a powerful factor: going from a 32-bit architecture with a 32-bit plain int and adapting that source for a 64-bit architecture is simply easier if you keep it 32 bits, both for the designers and their users, in two different ways:
The first is that the fewer differences there are across systems, the easier things are for everyone. Discrepancies between systems have been nothing but headaches for most programmers: they serve only to make it harder to run code across systems. They would even add to the comparatively rare cases where you can't run code across computers with the same distribution, just 32-bit versus 64-bit. However, as John Kugelman pointed out, architectures have gone from a 16-bit to a 32-bit plain int before, so going through that hassle could be done again today, which ties into the next point.
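To illustrate the kind of discrepancy meant here, a concrete sketch (my example, not part of the original answer): the same expression is well-defined with a 64-bit int but overflows, with undefined behavior, where int is 32 bits.

#include <cstdio>

int main() {
    int x = 70000;
    // 70000 * 70000 = 4,900,000,000 needs 33 bits: fine with a 64-bit
    // int, but undefined behavior (signed overflow) where int is 32 bits.
    // int product = x * x;            // breaks on ILP32/LP64/LLP64
    long long product = 1LL * x * x;   // portable: widen before multiplying
    std::printf("%lld\n", product);
    return 0;
}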
The more significant component is the gap it would cause in integer sizes, or the new type that would be required to avoid it. Because sizeof(short) <= sizeof(int) <= sizeof(long) <= sizeof(long long) is in the actual specification, a gap is simply inevitable if int is moved to 64 bits. It starts with shifting long: if a plain int is adjusted to 64 bits, the constraint that sizeof(int) <= sizeof(long) would force long to be at least 64 bits as well, and from there there's an intrinsic gap in sizes. Since long or a plain int is usually what's used as a 32-bit integer, and neither of them could be now, the only remaining data type that could is short. Because short has a minimum of 16 bits, if you simply discard that size it could become 32 bits and fill the gap. However, short is intended to be optimized for space, so it should be kept as it is, and there are use cases for small, 16-bit integers as well. No matter how you arrange the sizes, one width is lost, and the use cases for an integer of that width become entirely unavailable.
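To make the resulting constraints concrete, a minimal compile-time sketch (C++11; assumes <cstdint> provides std::int32_t, which holds on all mainstream platforms):

#include <climits>   // CHAR_BIT
#include <cstdint>   // std::int32_t

// The ordering cited above, checked at compile time; it must hold on
// every conforming implementation.
static_assert(sizeof(short) <= sizeof(int) &&
              sizeof(int)   <= sizeof(long) &&
              sizeof(long)  <= sizeof(long long),
              "ordering mandated by the standard");

// A 64-bit int would drag long and long long up to 64 bits as well,
// leaving only short to cover 32 bits. Code that needs an exact width
// can sidestep the gap entirely with the <cstdint> aliases:
static_assert(sizeof(std::int32_t) * CHAR_BIT == 32,
              "int32_t is exactly 32 bits by definition");

int main() { return 0; }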
All of this implies that the specification itself would need to change, and even if a designer went rogue, it's highly likely their system would be damaged or grow obsolete because of the change. Designers of long-lasting systems have to work with an entire base of entwined code: their own code in the system, its dependencies, and the user code they'll want to run. Doing that huge amount of work without considering the repercussions is simply unwise.
As a side note, if your application is incompatible with an int wider than 32 bits, you can use static_assert(sizeof(int) * CHAR_BIT <= 32, "Int wider than 32 bits!");. However, who knows, maybe the specification will change and 64-bit plain ints will be implemented, so if you want to be future-proof, don't do the static assert.
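Spelled out as a full translation unit (note that CHAR_BIT comes from <climits>), the guard looks like this:

#include <climits>   // defines CHAR_BIT

// Refuses to compile on any implementation whose int is wider than 32 bits.
static_assert(sizeof(int) * CHAR_BIT <= 32, "Int wider than 32 bits!");

int main() { return 0; }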
Upvotes: 15
Reputation: 8809
I still think this is an opinionated question. Though Univac machines are by no means common, there are still working examples on display, such as the Univac 9400 in the technikum29 living computer museum near Frankfurt in Germany. People are still maintaining it in working order.
"The New C Standard (Excerpted material)" dated 2002-2008 says:
Common Implementations
The values that are most often greater than the ones shown next are those that apply to the type int. On hosted implementations they are often the same as the corresponding values for the type long. On a freestanding implementation the processors’ efficiency issues usually dictate the use of smaller numeric ranges, so the minimum values shown here are usually used. The values used for the corresponding character, short, long, and long long types are usually the same as the ones given in the standard.
The Unisys A Series[5] is unusual in not only using sign magnitude, but having a single size (six bytes) for all non-character integer types (the type long long is not yet supported by this vendor’s implementation).
#define SHRT_MIN (-549755813887)
#define SHRT_MAX 549755813887
#define USHRT_MAX 549755813887U
#define INT_MIN (-549755813887)
#define INT_MAX 549755813887
#define UINT_MAX 549755813887U
#define LONG_MIN (-549755813887L)
#define LONG_MAX 549755813887L
#define ULONG_MAX 549755813887UL
The character type uses two’s complement notation and occupies a single byte.
The C compiler for the Unisys e-@ction Application Development Solutions (formerly known as the Universal Compiling System, UCS)[6] has 9-bit character types, an 18-bit short, a 36-bit int and long, and a 72-bit long long.
REF: http://c0x.coding-guidelines.com/5.2.4.2.1.pdf
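For reference, a small portable program (my sketch, not from the excerpted book) that reports the corresponding storage widths on whatever implementation compiles it; per the quoted text, the UCS compiler should report 9, 18, 36, 36, and 72 bits respectively:

#include <climits>   // CHAR_BIT
#include <cstdio>

int main() {
    std::printf("char:      %zu bits\n", sizeof(char)      * CHAR_BIT);
    std::printf("short:     %zu bits\n", sizeof(short)     * CHAR_BIT);
    std::printf("int:       %zu bits\n", sizeof(int)       * CHAR_BIT);
    std::printf("long:      %zu bits\n", sizeof(long)      * CHAR_BIT);
    std::printf("long long: %zu bits\n", sizeof(long long) * CHAR_BIT);
    return 0;
}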
Upvotes: 2