Reputation: 53
I know the standard says that if an integer literal does not fit in an int, the compiler tries unsigned int, and so forth, per section 2.14.2, Table 6.
My question is: what's the criterion for deciding whether it fits or not?
Why do both std::is_signed<decltype(0xFFFFFFFF)>::value and std::is_signed<decltype(0x80000000)>::value give false? Why don't they fit in int? 0x80000000 has the same bit representation as the signed value -2147483648.
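To see which type the compiler actually picks, here is a minimal sketch (mine, not from the question), assuming a platform where int is 32 bits:

    #include <type_traits>

    // On a 32-bit-int platform, 0x7FFFFFFF fits in int, but
    // 0x80000000 and 0xFFFFFFFF do not, so the literal rules
    // give them type unsigned int instead.
    static_assert(std::is_same<decltype(0x7FFFFFFF), int>::value, "");
    static_assert(std::is_same<decltype(0x80000000), unsigned int>::value, "");
    static_assert(std::is_same<decltype(0xFFFFFFFF), unsigned int>::value, "");

    int main() {}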
Upvotes: 2
Views: 181
Reputation: 29975
0xFFFFFFFF is hex for 4'294'967'295. On platforms where sizeof(int) == 4, the range of int is -2'147'483'648 to 2'147'483'647. As you can see, 4'294'967'295 isn't in that range. Simple as that.
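A quick way to see those numbers side by side (my own sketch; the commented output assumes 32-bit int):

    #include <iostream>
    #include <limits>

    int main() {
        // 0xFFFFFFFF has type unsigned int here, so it prints as unsigned.
        std::cout << 0xFFFFFFFF << '\n';                       // 4294967295
        std::cout << std::numeric_limits<int>::max() << '\n';  // 2147483647
        std::cout << std::numeric_limits<int>::min() << '\n';  // -2147483648
    }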
Upvotes: 3
Reputation: 26066
what's the criterion for deciding whether it fits or not?

Up to the platform and compiler. They define how big int is.

Why do both std::is_signed<decltype(0xFFFFFFFF)>::value and std::is_signed<decltype(0x80000000)>::value give false?

Because on most platforms 0x80000000 and 0xFFFFFFFF do not fit in an int, so they get type unsigned int.

Why don't they fit in int?

Because on most platforms int is 32-bit two's complement, which means 0x7FFFFFFF is the biggest int.
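To make the platform dependence concrete, a small sketch I added (the commented results hold only where int is 32 bits):

    #include <climits>
    #include <iostream>
    #include <type_traits>

    int main() {
        std::cout << "int: " << sizeof(int) * CHAR_BIT << " bits, "
                  << "INT_MAX = " << INT_MAX << '\n';

        // Where int is 32-bit, 0x7FFFFFFF still fits in int, while
        // 0x80000000 gets type unsigned int from the literal rules.
        std::cout << std::boolalpha
                  << std::is_signed<decltype(0x7FFFFFFF)>::value << '\n'   // true
                  << std::is_signed<decltype(0x80000000)>::value << '\n';  // false
    }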
Upvotes: 3
Reputation: 96116
You don't need to look at "bit representation" to check if the number fits or not.
Assuming sizeof(int) == 4, int can represent numbers from -2^31 to 2^31 - 1 inclusive. 0x80000000 is 2^31, which is 1 larger than the maximum value.
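That arithmetic can be checked at compile time (my sketch, again assuming sizeof(int) == 4):

    #include <limits>

    // 0x80000000 == 2^31, which is exactly INT_MAX + 1 on a 32-bit int.
    // Both operands are unsigned here, so the comparison does not overflow.
    static_assert(0x80000000 == 2147483648u, "2^31");
    static_assert(0x80000000 ==
                  static_cast<unsigned int>(std::numeric_limits<int>::max()) + 1u,
                  "one larger than INT_MAX");

    int main() {}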
Upvotes: 3