Reputation: 21444
The C standard says that, e.g., an int should have:
but in an implementation, e.g. on a 16-bit machine, the values are:
Why is there this difference in the negative values?
Upvotes: 3
Views: 331
Reputation: 67148
Both ranges are valid; the first one, [-32767 ... 32767], is the unusual one, but it is perfectly correct according to the C standard.
In many implementations the minimum value for a short integer is -2^15, that is -32768. You'll see it defined as:
#define SHRT_MIN (-0x7FFF - 1)
#define SHRT_MIN (-32767 - 1)
#define SHRT_MIN (-32768)
The standard only requires it to be -(2^15 - 1) (that is, -32767) or less; the actual value depends on the particular system and library implementation. Because most implementations use two's complement to represent negative numbers (where zero counts among the non-negative values), the minimum negative value is one unit lower. In practice this means that, regardless of compiler and platform, you can be sure you can store at least -32767 in a short (though for some compilers/platforms the range may be wider, as you saw with your compiler).
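If you want to check which limits your own compiler and library actually use, here is a minimal sketch (it only assumes a hosted implementation, so that <stdio.h> and <limits.h> are available):

#include <stdio.h>
#include <limits.h>

int main(void)
{
    /* Print the limits this implementation defines. */
    printf("SHRT_MIN = %d, SHRT_MAX = %d\n", SHRT_MIN, SHRT_MAX);
    printf("INT_MIN  = %d, INT_MAX  = %d\n", INT_MIN, INT_MAX);
    return 0;
}

On a typical two's complement platform this prints SHRT_MIN = -32768, even though a conforming implementation is only required to give you -32767 or less.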
Please note that in this case the literal -32768 makes sense (on platforms where sizeof(int) > sizeof(short)), unlike with INT_MIN, because the literal value is actually an int (not a short).
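To make that last point concrete, here is a small sketch (the MY_ macro name is hypothetical and used only for illustration; real headers already define INT_MIN and SHRT_MIN):

#include <stdio.h>

/* Both operands fit in an int, so the whole expression has type int
   on every conforming implementation. */
#define MY_INT_MIN (-32767 - 1)

int main(void)
{
    /* C has no negative constants: -32768 is unary minus applied to the
       constant 32768.  On a 16-bit-int machine, 32768 does not fit in an
       int, so that constant (and therefore the whole expression -32768)
       has type long; that is why INT_MIN is not spelled (-32768).
       For SHRT_MIN the spelling (-32768) is harmless whenever
       sizeof(int) > sizeof(short): it is an ordinary int value that is
       converted to short when used. */
    int i = MY_INT_MIN;
    printf("MY_INT_MIN = %d\n", i);
    return 0;
}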
Upvotes: 5