Alan

Reputation: 1

Why is signed overflow due to computation still undefined behavior in C++20?

I came to know through this answer that:

Signed overflow due to computation is still undefined behavior in C++20, while signed overflow due to conversion is well defined in C++20 (it was implementation-defined before C++20).

And this change for signed overflow due to conversion was made because, from C++20, compilers are required to use two's complement representation.

My question is:

If compilers are required to use two's complement from C++20, then why isn't signed overflow due to computation well-defined, just like signed overflow due to conversion?

That is, why (and how) is there a difference between overflow due to computation and overflow due to conversion? Essentially, why are these two kinds of overflow treated differently?
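
For concreteness, here is a minimal sketch (my own illustration, not code from the question) of the two kinds of overflow:

```cpp
#include <cstdint>
#include <limits>

// Overflow due to conversion: well defined since C++20.
// The source value is reduced modulo 2^32, so this returns -1
// (before C++20 the result was implementation-defined).
std::int32_t conversion_overflow() {
    return static_cast<std::int32_t>(4294967295u);
}

// Overflow due to computation: still undefined behavior.
// If x == INT_MAX, the addition has no defined result and the
// compiler is allowed to assume it never happens.
int computation_overflow(int x) {
    return x + 1; // UB when x == std::numeric_limits<int>::max()
}
```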

Upvotes: 10

Views: 1995

Answers (2)

Adrian McCarthy

Reputation: 48021

Based on JF Bastien's 2018 CppCon talk:

Many (most?) integer overflows are bugs—not just because they open the door to undefined behavior, but also because, even if the overflow was defined to wrap, the code would still be wrong.

Your compiler and other tools could help you find these bugs by trapping on overflow, which is allowed because overflow is UB, so the compiler can do whatever it wants. If the behavior were defined, the compiler wouldn't have the flexibility to help.
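
For example (a concrete illustration using GCC/Clang flags that the answer itself doesn't name): compiling with `-ftrapv` makes signed arithmetic overflow abort at runtime, and UBSan's `-fsanitize=signed-integer-overflow` reports each overflow as it happens. Both are conforming precisely because overflow is UB.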

Thus the standard declares that the representation shall be two's complement. It does not, however, define how the arithmetic operations should behave when there's an overflow, because there is no good solution that works for everyone.

If the standard were to define overflow behavior, how should it define it? Many would want/expect wrapping, but others would find trapping more useful, and saturation can be useful in several domains. Since programmers can make C++ classes that behave like arithmetic types, you could have a library of integer-like types that implement whatever overflow policy you like. If you know that your code will never overflow, why should you pay the overhead of any of those behaviors?
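
As a sketch of that idea (a hypothetical `WrappingInt32`, not an existing library type), a wrapping policy can be built on unsigned arithmetic, which is always well defined:

```cpp
#include <cstdint>

// A minimal wrapping 32-bit integer. Unsigned arithmetic wraps
// modulo 2^32 by definition, so the addition itself is well defined;
// the conversion back to std::int32_t is also well defined (modular)
// since C++20.
struct WrappingInt32 {
    std::int32_t value;

    friend WrappingInt32 operator+(WrappingInt32 a, WrappingInt32 b) {
        return {static_cast<std::int32_t>(
            static_cast<std::uint32_t>(a.value) +
            static_cast<std::uint32_t>(b.value))};
    }
};
```

A saturating or trapping type could be written along the same lines, and code that never overflows pays for none of them.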

Upvotes: 1

eerorika

Reputation: 238431

If support for non-two's-complement representations had been the only concern, then signed arithmetic overflow could have been defined as having an implementation-defined result, just as integer conversion has been. There are reasons why it is UB instead, and those reasons haven't changed; nor have the rules for signed arithmetic overflow changed.

In case of any UB, there are essentially two primary reasons for it to exist:

  • Portability. Different systems behave in different ways, and UB allows supporting all of them in an optimal way. In this case, as Martin Rosenau mentions in a comment, there are systems that don't simply produce a "wrong" value on overflow.
  • Optimisation. UB allows a compiler to assume that overflow doesn't happen, which enables optimisations based on that assumption. Jarod42 shows an example in a comment; a minimal sketch follows this list. Another example is that with UB on overflow, the compiler may deduce that adding two positive numbers never produces a negative number, nor a number smaller than either of the two.
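
To illustrate the optimisation point (a minimal sketch, not the example from Jarod42's comment):

```cpp
// Because signed overflow is UB, the compiler may assume x + 1
// never overflows and fold this whole function to `return true`.
// If wrapping were defined, it would have to keep the comparison.
bool incremented_is_greater(int x) {
    return x + 1 > x;
}
```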

Upvotes: 12
