Reputation: 4848
The upcoming C23 Standard adds a keyword _BitInt(), which, as I understand it, can be used to define an integer with a specific number of bits. However, I could not find much information regarding the in-memory representation of types declared this way, or any behavior that depends on their representation, such as their size or alignment.
So: is there any difference in behavior, representation, or alignment requirements between _BitInt() types and 'real' integer types of the same bit width? For example, between _BitInt(32) and int32_t or int_least32_t? And is it well-defined to type-pun between them?
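For concreteness, here is a minimal sketch of the kind of declarations I mean (assuming a C23 compiler with _BitInt support; the names are just for illustration):

```c
#include <stdint.h>

int main(void)
{
    int32_t     a = 1;   /* "real" fixed-width integer type            */
    _BitInt(32) b = 1;   /* bit-precise integer type of the same width */
    _BitInt(7)  c = 1;   /* widths need not match any standard type    */
    (void)a; (void)b; (void)c;
    return 0;
}
```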
Upvotes: 4
Views: 2406
Reputation: 281748
One important behavioral difference is that _BitInt types are exempt from the integer promotions. Adding two int16_t values produces an int, while adding two _BitInt(16) values produces a _BitInt(16). Multiplying two unsigned _BitInt(16) values produces an unsigned _BitInt(16), while multiplying two uint16_t values on a platform with 32-bit ints produces an int (and possible signed overflow).
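For illustration, a minimal sketch of that difference (assuming a C23 compiler with _BitInt support and a platform with 32-bit int):

```c
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint16_t             u = 0xFFFF;
    unsigned _BitInt(16) b = 0xFFFF;

    /* Each uint16_t operand is promoted to (signed) int, so
       0xFFFF * 0xFFFF overflows a 32-bit int: undefined behavior. */
    /* uint32_t bad = u * u; */

    /* No promotion for the bit-precise type: the result is an
       unsigned _BitInt(16), which wraps modulo 2^16 and yields 1. */
    unsigned _BitInt(16) ok = b * b;

    printf("%u\n", (unsigned)ok);  /* prints 1 */
    (void)u;
    return 0;
}
```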
Upvotes: 3