user9723177

Reputation:

Is it possible to test whether a type supports negative zero in C++ at compile time?

Is there a way to write a type trait to determine whether a type supports negative zero in C++ (including integer representations such as sign-and-magnitude)? I don't see anything that directly does that, and std::signbit doesn't appear to be constexpr.

To clarify: I'm asking because I want to know whether this is possible, regardless of what the use case might be, if any.

Upvotes: 8

Views: 680

Answers (4)

phuclv

Reputation: 41962

The standard std::signbit function in C++ has an overload that receives an integral value

  • bool signbit( IntegralType arg ); (4) (since C++11)

So you can check with static_assert(signbit(-0)). However, there's a footnote on that overload (emphasis mine)

  1. A set of overloads or a function template accepting the arg argument of any integral type. Equivalent to (2) (the argument is cast to double).

which unfortunately means you still have to rely on a floating-point type with negative zero. You can require the use of IEEE-754, which has a signed zero, by checking std::numeric_limits<double>::is_iec559
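A minimal sketch of that approach (the function name is mine; note that std::signbit is a run-time call here, not constexpr in standard C++):

```cpp
#include <cmath>
#include <limits>

// If the platform claims IEC 559 (IEEE-754) doubles, a signed zero is
// guaranteed to exist for double, and std::signbit can detect it.
static_assert(std::numeric_limits<double>::is_iec559,
              "double is not IEEE-754; signed zero not guaranteed");

bool double_has_negative_zero() {
    return std::signbit(-0.0);  // distinguishes -0.0 from +0.0
}
```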

Similarly std::copysign has the overload Promoted copysign( Arithmetic1 x, Arithmetic2 y ); that can be used for this purpose. Unfortunately, neither signbit nor copysign is constexpr according to the current standard, although there are proposals to change that

Yet Clang and GCC already treat those functions as constexpr, if you don't want to wait for the standard to catch up.
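For example, both compilers expose __builtin_signbit and constant-fold it, so a compile-time check like this compiles there as an extension (not portable standard C++):

```cpp
// Compiler-extension sketch: GCC and Clang accept __builtin_signbit in
// a constant expression even though std::signbit is not required to be
// constexpr by the standard.
#if defined(__GNUC__) || defined(__clang__)
static_assert(__builtin_signbit(-0.0), "double has a negative zero");
#endif
```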


Systems with a negative zero also have a balanced range, so you can just check whether the positive and negative ranges have the same magnitude

if constexpr(-std::numeric_limits<int>::max() != std::numeric_limits<int>::min() + 1) // or
if constexpr(-std::numeric_limits<int>::max() == std::numeric_limits<int>::min())
    // has negative zero

In fact -INT_MAX - 1 is also how libraries define INT_MIN on two's complement systems
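A quick sketch of why: on two's-complement targets INT_MIN cannot be written as a plain literal (2147483648 does not fit in a 32-bit int), so <climits> spells it as an expression:

```cpp
#include <climits>

// On a two's-complement target, the most negative int is one below the
// negated maximum, which is exactly how <climits> typically defines it.
static_assert(INT_MIN == -INT_MAX - 1, "two's complement int expected");
```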

But the simplest solution would be to eliminate the non-two's-complement cases, which are pretty much non-existent nowadays

static_assert(-1 == ~0, "This requires the use of 2's complement");
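The balanced-range check above can also be wrapped as a reusable compile-time constant (the name maybe_has_negative_zero is mine): a balanced range, where |min| == |max|, leaves a spare bit pattern that sign-and-magnitude and ones' complement spend on -0.

```cpp
#include <limits>

// A type whose range is balanced may support a negative zero;
// two's complement has an asymmetric range and does not.
template <class T>
constexpr bool maybe_has_negative_zero =
    -std::numeric_limits<T>::max() == std::numeric_limits<T>::min();

static_assert(!maybe_has_negative_zero<int>,
              "two's complement: no negative zero");
```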


Upvotes: 1

Joshua

Reputation: 43327

Somebody's going to come by and point out this is all-wrong standards-wise.

Anyway, decimal machines aren't allowed anymore, and through the ages there's been only one negative-zero representation in practice. As a practical matter, these tests suffice:

INT_MIN == -INT_MAX && ~0 == 0
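That practical test can be evaluated at compile time; on the two's-complement machines we actually build on, both conditions are false (a sketch, not the answer's own code):

```cpp
#include <climits>

// Both halves of the practical test are false under two's complement,
// where the range is asymmetric and ~0 is -1 rather than negative zero.
constexpr bool practical_negative_zero = INT_MIN == -INT_MAX && ~0 == 0;
static_assert(!practical_negative_zero, "no negative zero expected here");
```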

but your code doesn't work, for two reasons. Despite what the standard says, constexpr expressions are evaluated on the host using host rules, and there exists an architecture where this crashes at compile time.

Trying to massage out the trap is not possible. ~(unsigned)0 == (unsigned)-1 reliably tests for two's complement, so its inverse does indeed check for ones' complement*; however, ~0 is the only way to generate negative zero on ones' complement, and any use of that value as a signed number can trap, so we can't test for its behavior. Even using platform-specific code, we can't catch traps in constexpr, so forget about it.

*barring truly exotic arithmetic but hey

Everybody uses #defines for architecture selection. If you need to know, use it.

If you handed me an actually standards-compliant compiler that yielded a compile error on a trap in a constexpr and evaluated with target-platform rules rather than host-platform rules with converted results, we could do this:

target.o: target.c++
    $(CXX) -c target.c++ || $(CXX) -DTRAP_ZERO -c target.c++

bool has_negativezero() {
#ifndef TRAP_ZERO
        return INT_MIN == -INT_MAX && ~0 == 0;
#else
        return 0;
#endif
}

Upvotes: 2

Michael Veksler

Reputation: 8475

The best one can do is to rule out the possibility of signed zero at compile time, but one can never be completely positive about its existence at compile time. The C++ standard goes a long way toward preventing checks of the binary representation at compile time:

  • reinterpret_cast<char*>(&value) is forbidden in constexpr.
  • using union types to circumvent the above rule in constexpr is also forbidden.
  • Operations on zero and negative zero of integer types behave exactly the same, per the C++ standard, with no way to differentiate them.
  • For floating-point operations, division by zero is forbidden in a constant expression, so testing 1/0.0 != 1/-0.0 is out of the question.
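The third point can be seen directly in a constant expression: for integer types, -0 is the same value as 0 in every observable way, so no value-based constexpr probe can tell a negative-zero representation apart.

```cpp
// Integer -0 is indistinguishable from 0 in constant expressions:
// both comparisons and bitwise results are identical by the standard.
static_assert(0 == -0, "integer -0 compares equal to 0");
static_assert((0 ^ -0) == 0, "bitwise result is identical too");
```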

The only thing one can test is whether the domain of an integer type is dense enough to rule out a signed zero:

#include <limits>
#include <type_traits>

template<typename T>
constexpr bool test_possible_signed_zero()
{
    using limits = std::numeric_limits<T>;
    if constexpr (std::is_fundamental_v<T> &&
           limits::is_exact &&
           limits::is_integer) {
        auto low = limits::min();
        auto high = limits::max();
        T carry = 1;
        // This is one of the simplest ways to check that
        // the max() - min() + 1 == 2 ** bits
        // without stepping out into undefined behavior.
        for (auto bits = limits::digits ; bits > 0 ; --bits) {
        auto adder = low % 2 + high % 2 + carry;
            if (adder % 2 != 0) return true;
            carry = adder / 2;
            low /= 2;
            high /= 2;
        }
        return false;
    } else {
        return true;
    }
}

template <typename T>
class is_possible_signed_zero:
 public std::integral_constant<bool, test_possible_signed_zero<T>()>
{};
template <typename T>
constexpr bool is_possible_signed_zero_v = is_possible_signed_zero<T>::value;

It is only guaranteed that if this trait returns false, then no signed zero is possible. This assurance is very weak, but I can't see any stronger one. Also, it says nothing constructive about floating-point types; I could not find any reasonable way to test them.

Upvotes: 2

Serge Ballesta

Reputation: 149155

Unfortunately, I cannot imagine a way to do that. The fact is that the C++ standard considers type representations to be none of the programmer's concern (*); representation requirements are there only to tell implementors what they should do.

As a programmer all you have to know is that:

  • two's complement is not the only possible representation for negative integers
  • a negative 0 could exist
  • an arithmetic operation on integers cannot return a negative 0; only bitwise operations can
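The last two points can be illustrated in a short sketch: on a (hypothetical) ones'-complement machine ~0 would be a negative zero, while on the two's-complement hosts we actually run on it is simply -1, and arithmetic always yields a plain zero.

```cpp
// Arithmetic never produces a negative zero; the bitwise candidate ~0
// is negative zero only on ones' complement. In practice (two's
// complement) it is just -1.
static_assert(0 - 0 == 0, "arithmetic yields plain zero");
static_assert(~0 == -1, "two's complement in practice");
```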

(*) Opinion here: knowing the internal representation could lead programmers to use the good old optimizations that blindly ignored the strict aliasing rule. If you see a type as an opaque object that can only be used in standard operations, you will have fewer portability questions...

Upvotes: 3
