Reputation: 3669
If I compare two floating-point numbers, are there cases where a >= b is not equivalent to b <= a and !(a < b), or where a == b is not equivalent to b == a and !(a != b)?
In other words: are comparisons always "symmetrical", such that I can get the same result on a comparison by swapping the operands and mirroring the operator? And are they always "negatable", such that negating an operator (e.g. > to <=) is equivalent to applying a logical NOT (!) to the result?
Upvotes: 6
Views: 824
Reputation: 106247
Assuming IEEE-754 floating-point:
- a >= b is always equivalent to b <= a.*
- a >= b is equivalent to !(a < b), unless one or both of a or b is NaN.
- a == b is always equivalent to b == a.*
- a == b is equivalent to !(a != b), unless one or both of a or b is NaN.
More generally: trichotomy does not hold for floating-point numbers. Instead, a related property holds [IEEE-754 (1985) §5.7]:
Four mutually exclusive relations are possible: less than, equal, greater than, and unordered. The last case arises when at least one operand is NaN. Every NaN shall compare unordered with everything, including itself.
Note that this is not really an "anomaly" so much as a consequence of extending the arithmetic to be closed in a way that attempts to maintain consistency with real arithmetic when possible.
[*] true in abstract IEEE-754 arithmetic. In real usage, some compilers might cause this to be violated in rare cases as a result of doing computations with extended precision (MSVC, I'm looking at you). Now that most floating-point computation on the Intel architecture is done on SSE instead of x87, this is less of a concern (and it was always a bug from the standpoint of IEEE-754, anyway).
Upvotes: 9
Reputation: 7201
Aside from the NaN issue, which is somewhat analogous to NULL in SQL and missing values in SAS and other statistical packages, there is always the problem of floating-point accuracy. Values with repeating binary fractions (1/3, for example) and irrational numbers cannot be represented exactly. Floating-point arithmetic often rounds results because of the finite limit on precision. The more arithmetic you do with a floating-point value, the larger the error that creeps in.
Probably the most useful way to compare floating point values would be with an algorithm:
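One common such algorithm is a tolerance-based comparison; a minimal Python sketch, with illustrative (not prescriptive) tolerance values:

```python
def nearly_equal(a, b, rel_tol=1e-9, abs_tol=1e-12):
    """Treat a and b as equal if they differ by at most a small
    absolute tolerance or a small fraction of their magnitude."""
    return abs(a - b) <= max(abs_tol, rel_tol * max(abs(a), abs(b)))

# 0.1 + 0.2 is not exactly 0.3 in binary floating point...
assert (0.1 + 0.2) != 0.3
# ...but it is "nearly equal" under a tolerance:
assert nearly_equal(0.1 + 0.2, 0.3)
```

Python 3.5+ ships math.isclose, which implements essentially this idea with rel_tol and abs_tol parameters.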
Note that comparing for "<=" or ">=" has the same risk as comparison for precise equality.
Upvotes: 2
Reputation: 46
The set of IEEE-754 floating-point values is not totally ordered, so some of the relational and boolean algebra you are familiar with no longer holds. The anomaly is caused by NaN, which compares unordered with every value in the set, including itself: the ordered comparisons <, <=, ==, >=, and > all return false when either operand is NaN (and != returns true). This is exactly what Mark Byers has shown.
If you exclude NaN then you now have an ordered set and the expressions you provided will always be equivalent. This includes the infinities and negative zero.
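A quick Python check of that claim for the infinities and negative zero (Python floats are IEEE-754 doubles on mainstream platforms):

```python
inf = float("inf")

# The infinities sit at the ends of the total order:
assert -inf < -1e308 < 0.0 < 1e308 < inf
assert (inf >= 1.0) == (not (inf < 1.0))

# Negative zero compares equal to positive zero...
assert -0.0 == 0.0
assert (-0.0 >= 0.0) == (not (-0.0 < 0.0))
# ...so both equivalences from the question hold once NaN is excluded.
```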
Upvotes: 3
Reputation: 838746
In Python at least, a >= b is not equivalent to not (a < b) when there is a NaN involved:
>>> a = float('nan')
>>> b = 0
>>> a >= b
False
>>> not (a < b)
True
I would imagine that this is also the case in most other languages.
Another thing that might surprise you is that NaN doesn't even compare equal to itself:
>>> a == a
False
Upvotes: 3
Reputation: 17268
Not for any sane floating-point implementation: basic symmetry holds, and, NaN aside, so does the boolean logic. However, equality in floating-point numbers is tricky in other ways: there are very few cases where testing a == b for floats is the reasonable thing to do.
Upvotes: 1