Ashutosh Tiwari

Reputation: 307

Finding SHRT_MAX on systems without <limits.h> or <values.h>

I am reading The C++ Answer Book by Tony L Hansen. It says somewhere that the value of SHRT_MAX (the largest value of a short) can be derived as follows:

const CHAR_BIT=           8;
#define BITS(type)       (CHAR_BIT*(int)sizeof(type))
#define HIBIT(type)      ((type)(1<< (BITS(type)-1)))
#define TYPE_MAX(type)   ((type)~HIBIT(type));
const SHRT_MAX=          TYPE_MAX(short);

Could someone explain in simple words what is happening in the above 5 lines?

Upvotes: 1

Views: 653

Answers (3)

user743382

Reputation:

const CHAR_BIT=           8;

Assuming int is added here (and below): CHAR_BIT is the number of bits in a char. Its value is assumed here without checking.

#define BITS(type)       (CHAR_BIT*(int)sizeof(type))

BITS(type) is the number of bits in type. If sizeof(short) == 2, then BITS(short) is 8*2 == 16.

Note that C++ does not guarantee that all bits in integer types other than char contribute to the value, but the below will assume that nonetheless.
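As a quick sanity check, the macro can be compared against the real CHAR_BIT from &lt;climits&gt; instead of the hard-coded 8 (a sketch; only the char case is guaranteed by the standard):

```cpp
#include <climits>  // the real CHAR_BIT, rather than the assumed 8

// The macro from the question, using the library's CHAR_BIT:
#define BITS(type) (CHAR_BIT * (int)sizeof(type))

// sizeof(char) == 1 by definition, so this always holds:
static_assert(BITS(char) == CHAR_BIT, "char is exactly CHAR_BIT bits");

// On a typical platform with 8-bit bytes and 2-byte shorts, BITS(short)
// is 16 -- but the standard does not guarantee either of those sizes.
```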

#define HIBIT(type)      ((type)(1<< (BITS(type)-1)))

If BITS(short) == 16, then HIBIT(short) is ((short)(1<<15)). This is implementation-dependent, but assumed to have the sign bit set, and all value bits zero.

#define TYPE_MAX(type)   ((type)~HIBIT(type));

If HIBIT(short) is (short)32768 (that is, the bit pattern 0x8000), then TYPE_MAX(short) is (short)~(short)32768. This is assumed to have the sign bit cleared, and all value bits set.

const SHRT_MAX=          TYPE_MAX(short);

If all the assumptions hold, so that this really does have every value bit set and the sign bit clear, then this is the highest value representable in short.
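Put together, the five lines become compilable C++ like this (a sketch: the missing int is restored, the stray semicolon dropped, and the constant renamed so it does not clash with &lt;climits&gt;, which is pulled in only to cross-check the result):

```cpp
#include <climits>  // only to cross-check against the library's SHRT_MAX

const int MY_CHAR_BIT = 8;                              // assumed, not checked
#define BITS(type)     (MY_CHAR_BIT * (int)sizeof(type))
#define HIBIT(type)    ((type)(1 << (BITS(type) - 1)))  // sign bit set, value bits clear
#define TYPE_MAX(type) ((type)~HIBIT(type))             // stray ';' dropped

const short my_shrt_max = TYPE_MAX(short);
// On a typical two's-complement platform with a 16-bit short,
// my_shrt_max equals the library's SHRT_MAX, i.e. 32767.
```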


It's possible to get the maximum value more reliably in modern C++ when you know that:

  • the maximum value for an unsigned type is trivially obtainable
  • the maximum value for a signed type is assuredly either equal to the maximum value of the corresponding unsigned type, or that value right-shifted until it's in the signed type's range
  • a conversion of an out-of-range value to a signed type does not have undefined behaviour, but instead gives an implementation-defined value in the signed type's range:
template <typename S, typename U>
constexpr S get_max_value(U u) {
    S s = u;
    while (s < 0 || s != u)
        s = u >>= 1;
    return u;
}

constexpr unsigned short USHRT_MAX = -1;
constexpr short SHRT_MAX = get_max_value<short>(USHRT_MAX);
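The template can be checked at compile time against std::numeric_limits (a self-contained sketch of the same function, requiring C++14 for the loop in constexpr; the assertions hold on common platforms):

```cpp
#include <limits>

template <typename S, typename U>
constexpr S get_max_value(U u) {
    S s = u;                // implementation-defined result if u is out of range
    while (s < 0 || s != u)
        s = u >>= 1;        // halve until the value fits and is non-negative
    return u;               // now in range, so the conversion is exact
}

// Cross-check the derived maxima against the standard library:
static_assert(get_max_value<short>(static_cast<unsigned short>(-1))
                  == std::numeric_limits<short>::max(), "short max");
static_assert(get_max_value<int>(~0u)
                  == std::numeric_limits<int>::max(), "int max");
```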

Upvotes: 2

Tom Zych

Reputation: 13586

Taking it one line at a time:

const CHAR_BIT=           8;

Declare and initialize CHAR_BIT as a variable of type const int with value 8. This relies on the old "implicit int" rule, which standard C++ has never allowed (and which C dropped in C99), so a conforming compiler will reject it; you must specify the type.

#define BITS(type)       (CHAR_BIT* (int)sizeof(type))

Preprocessor macro, converting a type to the number of bits in that type. (The asterisk isn’t making anything a pointer, it’s for multiplication. Would be clearer if the author had put a space before it.)

#define HIBIT(type)      ((type)(1<< (BITS(type)-1)))

Macro, converting a type to a number of that type with the highest bit set to one and all other bits zero.

#define TYPE_MAX(type)   ((type)~HIBIT(type));

Macro, inverting HIBIT so the highest bit is zero and all others are one. This will be the maximum value of type if it’s a signed type and the machine uses two’s complement. The semicolon shouldn’t be there, but it will work in this code.
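The stray semicolon happens to be harmless here because the macro is the whole initializer, but it bites as soon as the macro sits mid-expression. A small illustration (assuming 8-bit bytes and two's complement, as in the book):

```cpp
#define BITS(type)     (8 * (int)sizeof(type))
#define HIBIT(type)    ((type)(1 << (BITS(type) - 1)))
#define TYPE_MAX(type) ((type)~HIBIT(type));  // stray ';' kept on purpose

const short ok = TYPE_MAX(short)   // fine: the macro's own ';' ends the declaration
// const short bad = TYPE_MAX(short) + 1;  // error: expands to "...; + 1;"
```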

const SHRT_MAX=          TYPE_MAX(short);

Uses the above macros to compute the maximum value of a short.

Upvotes: 0

Acorn

Reputation: 26146

Reformatting a bit:

const CHAR_BIT = 8;

Invalid code in C++; it looks like old C code relying on implicit int. Let's assume that const int was meant.

#define BITS(type)       (CHAR_BIT * (int)sizeof(type))

Returns the number of bits that a type takes assuming 8-bit bytes, because sizeof returns the number of bytes of the object representation of type.

#define HIBIT(type)      ((type) (1 << (BITS(type) - 1)))

Assuming type is a signed integer in two's complement, this would return an integer of that type with only the highest bit set. For instance, for an 8-bit integer, you would get 1 << (8 - 1) == 1 << 7 == 0b10000000, which read as a signed 8-bit value is -128.

#define TYPE_MAX(type)   ((type) ~HIBIT(type));

The bitwise not of the previous thing, i.e. flips each bit. Following the same example as before, you would get ~0b10000000 == 0b01111111 == 127.
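The 8-bit walk-through in code (a sketch assuming two's complement and an 8-bit char; 0b10000000 read as a signed char is -128, and its bitwise not is 127):

```cpp
// Acorn's 8-bit example, spelled out on signed char:
const signed char hi = (signed char)(1 << 7);  // bit pattern 10000000 -> -128
const signed char mx = (signed char)~hi;       // bit pattern 01111111 ->  127
```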

const SHRT_MAX = TYPE_MAX(short);

Again invalid, in both C and C++: in C++ because of the missing int, and in C because CHAR_BIT, being a const variable rather than a constant expression, cannot appear in a file-scope initializer. Let's assume const int again. This uses the previous macros to get the maximum of the short type.

Upvotes: 0
