Reputation: 495
I found the following definitions in /usr/include/limits.h:
# define INT_MIN (-INT_MAX - 1)
# define INT_MAX 2147483647
Also, it seems that all the XXX_MAX values in this header file are defined explicitly from numerical constants.
I wonder if there is a portable way (robust against different word sizes across platforms) to define an INT_MAX?
I tried the following:
~((int)-1)
But this seems incorrect.
A short explanation would also be highly appreciated.
Upvotes: 11
Views: 17927
Reputation: 3138
No, because there is no portable way to know the difference in the number of value bits between int and unsigned.
There is a portable way to obtain UINT_MAX, namely -1u, because unsigned integers are modulo types. Hence, the expression for INT_MAX would be
(int)(UINT_MAX >> (value_bits<unsigned> - value_bits<int>))
Unfortunately there is no way to get value_bits<unsigned> - value_bits<int>.
In C++ this seems to be possible with template meta-programming, recursing upward from 15 bits. (The range [-32767, 32767] is guaranteed to be representable in int.)
template<int bits>
struct max0
{
    static const int value = max0<bits - 1>::value * 2 + 1;
};

template<>
struct max0<15>
{
    static const int value = 32767;
};

template<int bits>
struct max1
{
    static const int value =
        max0<bits + 1>::value > max0<bits>::value ?
            max1<bits + 1>::value :
            max0<bits>::value;
};
#define INT_MAX (max1<15>::value)
But I find this overkill and stick with the compiler-defined __INT_MAX__. ;(
EDIT: Oops!
max.cpp: In instantiation of 'const int max0<32>::value':
max.cpp:17:51: recursively required from 'const int max1<16>::value'
max.cpp:17:51: required from 'const int max1<15>::value'
max.cpp:28:28: required from here
max.cpp:4:52: warning: integer overflow in expression [-Woverflow]
static const int value = max0<bits - 1>::value * 2 + 1;
~~~~~~~~~~~~~~~~~~~~~~^~~
max.cpp:4:22: error: overflow in constant expression [-fpermissive]
static const int value = max0<bits - 1>::value * 2 + 1;
^~~~~
max.cpp:4:22: error: overflow in constant expression [-fpermissive]
max.cpp:4:22: error: overflow in constant expression [-fpermissive]
max.cpp: In instantiation of 'const int max0<914>::value':
max.cpp:17:51: recursively required from 'const int max1<16>::value'
max.cpp:17:51: required from 'const int max1<15>::value'
max.cpp:28:28: required from here
max.cpp:4:52: fatal error: template instantiation depth exceeds maximum of 900 (use -ftemplate-depth= to increase the maximum)
static const int value = max0<bits - 1>::value * 2 + 1;
~~~~~~~~~~~~~~~~~~~~~~^~~
compilation terminated.
Upvotes: 2
Reputation: 64702
I like the definitions:
#define INT_MIN (1 << (sizeof(int)*CHAR_BIT-1))
#define INT_MAX (-(INT_MIN+1))
Upvotes: 2
Reputation: 320631
Well, one can try
#define INT_MAX (int) ((unsigned) -1 / 2)
which "should" work across platforms with different word size and even with different representations of signed integer values. (unsigned) -1
will portably produce the UINT_MAX
value, which is the all-bits-one pattern. Divided it by 2 it should become the expected max value for the corresponding signed integer type, which spends on bit for representing the sign.
But why? The standard header files and definitions made in them are not supposed to be portable.
BTW, the definition of INT_MIN
you quoted above is not portable. It is specific to 2's complement representation of signed integers.
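As a quick sanity check, the macro can be compared against the platform's own INT_MAX at compile time. This is only an illustrative sketch (the MY_INT_MAX name and the negative-array-size trick are just for the demo), and it assumes the usual case where unsigned has one more value bit than int:
#include <limits.h>

/* The proposed definition, under a demo name so it does not clash with limits.h. */
#define MY_INT_MAX ((int) ((unsigned) -1 / 2))

/* Classic pre-C11 compile-time assertion: the array size is 1 if the two
   values agree and -1 (a compile error) if they differ. */
typedef char my_int_max_matches[(MY_INT_MAX == INT_MAX) ? 1 : -1];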
Upvotes: 0
Reputation: 215367
For the INT_MAX in the standard header limits.h, the implementor's hands are tied by the fact that it's required to be usable in preprocessor #if directives. This rules out anything involving sizeof or casts.
If you just want a version that works in actual C expressions, perhaps this would work:
(int)-1U/2 == (int)(-1U/2) ? (int)-1U : (int)(-1U/2)
The concept here is that int may have the same number of value bits as unsigned, or one fewer value bit; the C standard allows either. In order to test which it is, check the result of the conversion (int)-1U. If -1U fits in int, its value must be unchanged by the cast, so the equality will be true. If -1U does not fit in int, then the cast results in an implementation-defined result of type int. No matter what the value is, though, the equality will be false merely by the range of possible values.
Note that, technically, the conversion to int could result in an implementation-defined signal being raised, rather than an implementation-defined value being obtained, but this is not going to happen when you're dealing with a constant expression which will be evaluated at compile time.
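As an illustration (the MY_INT_MAX macro name is just a wrapper for the demo, not part of the answer), a tiny program that prints the expression next to the value from limits.h on a hosted implementation:
#include <limits.h>
#include <stdio.h>

/* The expression from above, wrapped in a macro purely for the demo. */
#define MY_INT_MAX ((int)-1U/2 == (int)(-1U/2) ? (int)-1U : (int)(-1U/2))

int main(void)
{
    printf("computed : %d\n", MY_INT_MAX);  /* value of the expression above */
    printf("limits.h : %d\n", INT_MAX);     /* implementation's own value    */
    return 0;
}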
Upvotes: 10
Reputation: 154280
If we assume 1's or 2's complement notation, 8 bits/byte, and no padding:
#define INT_MAX ((1 << (sizeof(int)*8 - 2)) - 1 + (1 << (sizeof(int)*8 - 2)))
I do not see any overflow in the shifts nor in the additions. Neither do I see UB. I suppose one could use CHAR_BIT instead of 8.
In 1's and 2's complement the max int would be power(2, sizeof(int)*Bits_per_byte - 1) - 1. Instead of power, we'll use shift, but we can't shift that far at once. So we form power(2, sizeof(int)*Bits_per_byte - 1) by doing half of it twice. But overflow is a no-no, so subtract 1 before adding the 2nd half rather than at the end. I've used lots of () to emphasize the evaluation order.
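Here is a small illustrative demo of the two-halves idea (the HALF and MY_INT_MAX names are mine, purely for the demo; it uses CHAR_BIT instead of the literal 8, and the same representation/padding assumptions apply):
#include <limits.h>
#include <stdio.h>

/* With n = sizeof(int)*CHAR_BIT value+sign bits:                       */
#define HALF (1 << (sizeof(int)*CHAR_BIT - 2))  /* 2^(n-2), half of 2^(n-1)              */
#define MY_INT_MAX ((HALF - 1) + HALF)          /* (2^(n-2) - 1) + 2^(n-2) = 2^(n-1) - 1 */

int main(void)
{
    printf("half     : %d\n", HALF);
    printf("computed : %d\n", MY_INT_MAX);
    printf("limits.h : %d\n", INT_MAX);
    return 0;
}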
As pointed out by @caf, this method fails if there are padding bits - uncommon, but possible.
Computing INT_MIN so that it works in both 2's and 1's complement is a little trickier; a similar approach would work, but that is another question.
Upvotes: 0