Reputation: 1420
I know that the number of bits in a char is given by CHAR_BIT from <limits.h>, and that sizeof(char) is always 1. But what about the other basic datatypes: are their sizes defined relative to CHAR_BIT?

For example, the minimum size of an int in C is given as 16 bits on Wikipedia, but on other sites such as GeeksforGeeks the minimum size is given as 2 bytes. Which definition is correct, given that a byte is not necessarily 8 bits?
Upvotes: 0
Views: 483
Reputation: 780688
The minimum required range of values of a type is defined numerically, not in terms of bits or bytes. 5.2.4.2.1 Sizes of integer types <limits.h> contains definitions like:
- minimum value for an object of type short int: SHRT_MIN -32767 // -(2^15 - 1)
- maximum value for an object of type short int: SHRT_MAX +32767 // 2^15 - 1
All of these limits happen to match what you get by implementing the types with either two's complement or sign-magnitude representation and some whole number of 8-bit bytes. But since they are only minimums, the standard does not require such representations.
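To see the distinction in practice, here is a minimal sketch (output varies by implementation): the range macros are plain numeric values, and the storage width only falls out of sizeof and CHAR_BIT.

    #include <limits.h>
    #include <stdio.h>

    int main(void)
    {
        /* The standard only guarantees SHRT_MIN <= -32767 and
           SHRT_MAX >= +32767; it says nothing about bit or byte counts. */
        printf("SHRT_MIN = %d\n", SHRT_MIN);
        printf("SHRT_MAX = %d\n", SHRT_MAX);

        /* The storage width is a separate, implementation-defined property. */
        printf("CHAR_BIT = %d\n", CHAR_BIT);
        printf("bits in short = %zu\n", sizeof(short) * CHAR_BIT);
        return 0;
    }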
Upvotes: 4
Reputation: 153338
Is the minimum size of datatypes defined in bits or bytes?
Mostly it is defined by a combination of C specification properties and CHAR_BIT, where CHAR_BIT >= 8.
int has a minimum range of [-32767 ... 32767], obliging at least 16 bits to encode it.

With the common CHAR_BIT == 8, that is 16 bits, or 2 "bytes".

With a less common CHAR_BIT == 16, that is 16 bits, or 1 "byte"; a byte is 16 bits in such an implementation.

With a rare CHAR_BIT == 64, that is 64 bits, or 1 "byte": 64 bits, as no type is smaller than char.
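A small sketch of that arithmetic; the values in the comments assume a typical CHAR_BIT == 8 desktop, while on a CHAR_BIT == 16 DSP the byte count halves and the bit count stays the same.

    #include <limits.h>
    #include <stdio.h>

    int main(void)
    {
        /* int must cover at least [-32767 ... 32767], which needs
           at least 16 value bits no matter how wide a byte is. */
        printf("CHAR_BIT    = %d\n", CHAR_BIT);                 /* 8 on typical desktops */
        printf("sizeof(int) = %zu byte(s)\n", sizeof(int));     /* often 4 */
        printf("bits in int = %zu\n", sizeof(int) * CHAR_BIT);  /* often 32 */
        return 0;
    }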
Exact-size integer types like (u)intN_t are effectively defined by bit size. They too are specified by min/max range values, but the no-padding and two's-complement requirements mean those ranges pin down the exact bit width. Note: these are optional types.
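A sketch of the contrast: because int32_t is optional, a portable program guards on INT32_MAX, which an implementation defines exactly when it provides the type.

    #include <limits.h>
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
    #ifdef INT32_MAX   /* defined only when int32_t exists */
        /* int32_t has exactly 32 bits: two's complement, no padding,
           so sizeof(int32_t) * CHAR_BIT is 32 on every implementation
           that provides the type. */
        printf("bits in int32_t = %zu\n", sizeof(int32_t) * CHAR_BIT);
    #else
        puts("int32_t is not provided by this implementation");
    #endif
        return 0;
    }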
Some esoteric thoughts about the minimum floating-point bit size.
Upvotes: 3