Reputation: 141598
For floating point values, is it guaranteed that `a + b` is the same as¹ `b + a`?
I believe this is guaranteed in IEEE754; however, the C++ standard does not specify that IEEE754 must be used. The only relevant text seems to be from [expr.add]#3:
The result of the binary + operator is the sum of the operands.
The mathematical operation "sum" is commutative. However, the mathematical operation "sum" is also associative, whereas floating point addition is definitely not associative. So, it seems to me that we cannot conclude that the commutativity of "sum" in mathematics means that this quote specifies commutativity in C++.
Footnote 1:
"Same" as in bitwise identical, like memcmp
rather than ==
, to distinguish +0 from -0. IEEE754 treats +0.0 == -0.0
as true, but also has specific rules for signed zero. +0 + -0
and -0 + +0
both produce +0
in IEEE754, same for addition of opposite-sign values with equal magnitude. An ==
that followed IEEE semantics would hide non-commutativity of signed-zero if that was the criterion.
Also, a+b == b+a
is false with IEEE754 math if either input is NaN.
memcmp
will say whether two NaNs have the same bit-pattern (including payload), although we can consider NaN propagation rules separately from commutativity of valid math operations.
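A minimal sketch of that criterion, assuming IEEE754 `double`s (the `same_bits` helper is illustrative, not something from the question):

```cpp
#include <cmath>
#include <cstring>
#include <iostream>

// Illustrative helper: bitwise comparison of two doubles,
// i.e. memcmp semantics rather than IEEE == semantics.
bool same_bits(double a, double b) {
    return std::memcmp(&a, &b, sizeof(double)) == 0;
}

int main() {
    double pz = +0.0, nz = -0.0;
    std::cout << (pz == nz) << '\n';        // 1: IEEE == treats +0.0 == -0.0 as true
    std::cout << same_bits(pz, nz) << '\n'; // 0: the bit patterns differ (sign bit)

    double n = std::nan("");
    std::cout << (n == n) << '\n';          // 0: NaN compares unequal even to itself
    std::cout << same_bits(n, n) << '\n';   // 1: identical bit pattern
}
```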
Upvotes: 51
Views: 11205
Reputation: 364308
For a C++ implementation using IEEE FP math, addition at any given precision is commutative, except for NaN payloads.
Marc Glisse comments:
For builtin types, gcc will swap the operands of `+` without any particular precaution.
Finite inputs with non-zero results are the simple case, obviously commutative. Addition is one of the "basic" FP math operations so IEEE754 requires the result to be "correctly rounded" (rounding error <= 0.5 ulp), so there's only one possible numerical result, and only one bit-pattern that represents it.
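As a sketch of that "one bit-pattern" property, assuming IEEE754 doubles and C++20 for `std::bit_cast` (the `commutes_bitwise` helper is mine, not part of the answer):

```cpp
#include <bit>
#include <cstdint>
#include <iostream>

// Compare a+b and b+a as raw bit patterns rather than with ==, so that a
// signed-zero or NaN-payload difference would also show up.
bool commutes_bitwise(double a, double b) {
    return std::bit_cast<std::uint64_t>(a + b)
        == std::bit_cast<std::uint64_t>(b + a);
}

int main() {
    std::cout << commutes_bitwise(0.1, 0.2) << '\n';     // 1: one correctly rounded sum
    std::cout << commutes_bitwise(1e308, 1e308) << '\n'; // 1: overflows to +Inf either way
}
```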
Non-IEEE FP math may allow larger rounding errors (e.g. allowing off-by-one in the LSB of the mantissa, so rounding error <= 1 ulp). It could conceivably be non-commutative with the final result depending on which operand is which. I think most people would consider this a bad design, but C++ probably doesn't forbid it.
If the result is zero (finite inputs with equal magnitudes but opposite signs), it's always `+0.0` in IEEE math (or `-0.0` in the roundTowardNegative rounding mode). This rule covers the case of `+0 + (-0.0)` and the reverse both producing `+0.0`. See What is (+0)+(-0) by IEEE floating point standard?
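A hedged demo of the rounding-mode part (the `volatile`s are there to stop constant folding; GCC may additionally need `-frounding-math` for the mode switch to be honored):

```cpp
#include <cfenv>
#include <cmath>
#include <iostream>

int main() {
    volatile double pz = +0.0, nz = -0.0; // volatile: force the additions at runtime

    std::cout << std::signbit(pz + nz) << '\n'; // 0: +0.0 in the default rounding mode

    std::fesetround(FE_DOWNWARD);               // roundTowardNegative
    std::cout << std::signbit(pz + nz) << '\n'; // 1: now -0.0
}
```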
Inputs with different magnitudes can't underflow to zero, unless you have subnormal inputs for an FPU operating in flush-to-zero mode (subnormal outputs are rounded toward zero). In that case you can get `-0.0` as a result if the exact result was negative. But it's still commutative.
Addition can produce `-0.0` from `-0 + -0`, which is trivially commutative because both inputs are the same value.
`-Inf` + anything finite is `-Inf`. `+Inf` + anything finite is `+Inf`. `+Inf + -Inf` is NaN. None of these depends on operand order.
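For instance, a quick check of those rules with both operand orders per case:

```cpp
#include <limits>
#include <iostream>

int main() {
    const double inf = std::numeric_limits<double>::infinity();
    std::cout << (-inf + 1.0) << ' ' << (1.0 + -inf) << '\n'; // -inf -inf
    std::cout << (inf + 1.0)  << ' ' << (1.0 + inf)  << '\n'; // inf inf
    std::cout << (inf + -inf) << ' ' << (-inf + inf) << '\n'; // nan nan (NaN spelling varies)
}
```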
NaN + anything or anything + NaN is NaN. The "payload" (mantissa bits) of the resulting NaN depends on the FPU; IIRC, it usually keeps the payload of the input NaN.
NaN + NaN produces NaN. If I recall, nothing specifies which NaN payload is kept, or if a new payload could be invented. Hardly anyone does anything with NaN payloads to track where they came from, so this is not a big deal.
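A sketch of that payload caveat, assuming x86-SSE-style propagation (which payload survives is hardware-specific; `std::nan`'s string argument selects a payload, and the `volatile`s keep the sums away from the constant folder):

```cpp
#include <cmath>
#include <cstdint>
#include <cstring>
#include <iostream>

// Raw bit pattern of a double, for inspecting which NaN payload survived.
std::uint64_t bits(double d) {
    std::uint64_t u;
    std::memcpy(&u, &d, sizeof u);
    return u;
}

int main() {
    volatile double n1 = std::nan("1"), n2 = std::nan("2"); // two distinct payloads

    // On x86 SSE the first operand's NaN is kept, so these can print
    // different bit patterns: commutative except for the payload.
    std::cout << std::hex << bits(n1 + n2) << '\n'
                          << bits(n2 + n1) << '\n';
}
```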
Both inputs to `+` in C++ will be converted to matching types, specifically to the wider of the two input types if they don't already match. So there's no asymmetry of types.
For `a+b == b+a` on its own, that can be false for NaNs because of IEEE `==` semantics (not because of `+` semantics), just as `a+b == a+b` can be.
With strict FP math (no extra precision kept between C statements, e.g. gcc `-ffloat-store` if using legacy x87 math on x86), I think that equality is equivalent to `!isunordered(a,b)`, which tests whether either of them is NaN.

Otherwise it's possible that a compiler could CSE with earlier code for one but not the other and have one of them evaluated with higher-precision values of `a` and `b`. (Strict ISO C++ requires that high-precision temporaries only exist within expressions even for `FLT_EVAL_METHOD==2` (like x87), not across statements, but gcc by default doesn't respect that: only with `g++ -std=c++03` or whatever instead of `gnu++20`, or with `-ffloat-store` for x87 specifically.)
On a C++ implementation with `FLT_EVAL_METHOD == 0` (no extra precision for temporaries within an expression), this source of optimization differences wouldn't be a factor.
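You can see which evaluation method an implementation uses via the `FLT_EVAL_METHOD` macro from `<cfloat>`:

```cpp
#include <cfloat>
#include <iostream>

int main() {
    // 0: evaluate in the operand's type (typical SSE2 x86-64);
    // 2: evaluate in long double precision (x87-style); -1: indeterminate.
    std::cout << FLT_EVAL_METHOD << '\n';
}
```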
Upvotes: 3
Reputation:
It is not even required that `a + b == a + b`. One of the subexpressions may hold the result of the addition with more precision than the other one, for example when the use of multiple additions requires one of the subexpressions to be temporarily stored in memory, while the other subexpression can be kept in a register (with higher precision).

If `a + b == a + b` is not guaranteed, `a + b == b + a` cannot be guaranteed either. If `a + b` does not have to return the same value each time, and the values are different, one of them necessarily will not be equal to one particular evaluation of `b + a`.
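A hedged illustration of that memory-vs-register mechanism on an x87 build (e.g. `g++ -m32 -mfpmath=387` without `-ffloat-store`); whether it actually prints 0 depends on the compiler's spilling decisions, which is exactly the point:

```cpp
#include <iostream>

int main() {
    volatile double a = 1.0, b = 1e-17; // volatile: reload the operands each time

    double x = a + b; // may be rounded to 64-bit double if spilled to memory...
    // ...while the a + b below may be recomputed at 80-bit x87 precision
    // in a register, so this comparison can evaluate to false:
    std::cout << (x == a + b) << '\n';
}
```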
Upvotes: 25
Reputation: 490158
The C++ standard very specifically does not guarantee IEEE 754. The library does have some support for IEC 559 (which is basically just the IEC's version of the IEEE 754 standard), so you can check whether the underlying implementation uses IEEE 754/IEC 559, and when it does, you can depend on what it guarantees, of course.
For the most part, the C and C++ standards assume that such basic operations will be implemented however the underlying hardware works. For something as common as IEEE 754, they'll let you detect whether it's present, but still don't require it.
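The detection looks like this; `is_iec559` lives in `<limits>`, and a `static_assert` can turn the assumption into a compile-time requirement:

```cpp
#include <limits>
#include <iostream>

int main() {
    // True when double conforms to IEC 559 / IEEE 754 on this implementation.
    std::cout << std::numeric_limits<double>::is_iec559 << '\n';

    // Or make the assumption a hard, compile-time requirement:
    static_assert(std::numeric_limits<double>::is_iec559,
                  "this code assumes IEEE 754 doubles");
}
```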
Upvotes: 12
Reputation: 137810
No, the C++ language generally wouldn't make such a requirement of the hardware. Only the grammatical associativity of operators is defined, i.e. that `a + b + c` groups as `(a + b) + c`.
All kinds of crazy things do happen in floating-point arithmetic. Perhaps, on some machine, adding zero to a denormal number produces zero. It's conceivable that a machine could avoid updating memory in the case of adding a zero-valued register to a denormal in memory. It's possible that a really dumb compiler would always put the LHS in memory and the RHS in a register.
Note, though, that a machine with non-commutative addition would need to specifically define how expressions map to instructions, if you're going to have any control over which operation you get. Does the left-hand side go into the first machine operand or the second?
Such an ABI specification, mentioning the construction of expressions and instructions in the same breath, would be quite pathological.
Upvotes: 22