Reputation: 1130
In C++,
double x = 1.0;
double y = x / 3.0;
if (x == y * 3.0)
    cout << "They are equal!";
else
    cout << "They are NOT equal.";
will print
‘They are NOT equal.’
as expected, due to the non-exact representation of 1/3 as a binary number with a finite-sized mantissa. But in Python (on https://repl.it/repls/MessyJaggedMouse),
x = 1.0
y = x / 3.0
x == y * 3.0
prints
True
When and why does Python depart from the above expected behavior? Edit: Python doesn't depart -- see my answer below.
Upvotes: 2
Views: 419
Reputation: 1130
Python floats use double precision, with 52-bit mantissas:

>>> (1/3).hex()
'0x1.5555555555555p-2'   # 13 5's, 13*4 == 52

This is 1/3 in binary:

1/3 == 0.0101010101...

52 is an even number, so the mantissa (the part after the leading 1) that gets stored is

010101...

in 52 bits, ending in ...0101. So the IEEE 754 rounding does not change the 52-bit mantissa, and multiplying by 3, or binary 11, gives 0.111111..., which rounds to 1. Therefore

1.0 / 3.0 * 3.0 == 1.0

in IEEE 754, and therefore in Python and in C++ doubles, as pointed out by @Dietrich.
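The same argument can be checked directly from Python; a quick sketch using only the standard `float.hex()` method:

```python
# Sketch: checking the 52-bit mantissa argument with float.hex().
third = 1.0 / 3.0

# Thirteen hex 5's: 5 == 0b0101, so the 52 stored fraction bits are
# the alternating pattern 0101...01 -- exact at 52 bits, no round-up.
print(third.hex())         # 0x1.5555555555555p-2

# Multiplying by 3 (binary 11) gives 0.111...1, which rounds to 1.0.
print(third * 3.0 == 1.0)  # True
```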
The following code lets you explore the same story for floats, which have 23-bit mantissas (on my system), an odd number. Note that the rounding to 23 bits gets back to 1, but the automatic promotion from float to double in expressions like 3.0 * x == 1 prevents the rounding back to 1.
#include <iostream>
using namespace std;

int main() {
    float x = 1. / 3, y;
    cout << hexfloat;
    cout << x << endl;               // 1./3. IEEE rounded to 23 bits
    cout << x * 3.0 << endl;         // that times 3.0 exact, i.e., promoted to double
    cout << (double)x * 3. << endl;  // same
    cout << x * (float)3.0 << endl;  // remains 23 bits, rounds back to 1
    y = 3.0 * x;                     // 0x1.0000008p+0 cast back to 23-bit float
    cout << y << endl;               // again yields 1
    cout << boolalpha;               // now in terms of equalities:
    cout << (3.0 * x == 1.) << endl;
    cout << ((float)3. * x == 1.) << endl;
    cout << (y == 1.);
    return 0;
}
yields
0x1.555556p-2 // rounded to 1.0101...01011
0x1.0000008p+0 // that times 3.0
0x1.0000008p+0
0x1p+0 // now rounds to 1.00...
0x1p+0
false
true
true
Exercise: Explore 49.0. :)
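For the record, a quick Python sketch of the exercise (49 behaves differently from 3: the rounding error in 1/49 is not cancelled by the multiplication):

```python
# Sketch for the exercise: unlike 1/3, the rounded 1/49 picks up an
# error that multiplying by 49 does not cancel.
x = 1.0 / 49.0
print(x.hex())          # the 52-bit mantissa of 1/49 is not exact
print(x * 49.0 == 1.0)  # False: the product lands just below 1.0
```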
Upvotes: 0
Reputation: 213568
This happens to me only when I use x87 math in C.
Correctly rounded, using IEEE 754 double arithmetic you will get true.
However, if you compute intermediate values at a higher precision, you can get false. C is not required to compute all intermediate values at 64-bit precision, and on 32-bit x86 processors with x87 floating-point instructions, you will end up using the larger 80-bit floating-point type for intermediate values. Depending on the optimization settings enabled and the compiler details, the computation will be done differently (with different intermediate values) and you will get slightly different results.
#include <cstdio>

int main() {
    double x = 1.0;
    double y = x / 3.0;
    std::printf("x == y * 3.0: %s\n", x == y * 3.0 ? "true" : "false");
    return 0;
}
With GCC, I see false if you compile with -mfpmath=387 -O0. I see true if I compile without -mfpmath=387 (this defaults to SSE on AMD64) or if I compile with -O2.

You can see how it is compiled using x87 instructions on GodBolt: https://godbolt.org/z/rf1Rir -- try adding -O2 or removing -mfpmath=387 to see how it affects the generated code.
Note that it is a bit of a coincidence that (1.0 / 3.0) * 3.0 == 1.0. You can test the following code in Python, for example:
1.0 / 49.0 * 49.0 == 1.0
This should give False
.
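How much of a coincidence? A small sketch that sweeps the first fifty denominators (the particular list of failing n is simply whatever IEEE 754 double rounding produces):

```python
# Sketch: for which n does 1.0/n*n fail to round back to exactly 1.0?
fails = [n for n in range(1, 50) if 1.0 / n * n != 1.0]
print(fails)  # 49 is in the list; 3 is not
```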
Upvotes: 6