Reputation: 11
print(0.5+0.5+0.5+0.5+1e+16): 1.0000000000000002e+16
print(1e+16+0.5+0.5+0.5+0.5): 1e+16
Why do these two answers differ?
Upvotes: 1
Views: 80
Reputation: 43533
It is because of how Python parses the expressions. Let's use the ast module to see:
In [1]: import ast
In [2]: print(ast.dump(ast.parse("0.5+0.5+0.5+0.5+1e+16"), indent=4))
Module(
    body=[
        Expr(
            value=BinOp(
                left=BinOp(
                    left=BinOp(
                        left=BinOp(
                            left=Constant(value=0.5),
                            op=Add(),
                            right=Constant(value=0.5)),
                        op=Add(),
                        right=Constant(value=0.5)),
                    op=Add(),
                    right=Constant(value=0.5)),
                op=Add(),
                right=Constant(value=1e+16)))],
    type_ignores=[])
Since + is left-associative, the code above first adds all the 0.5 values (yielding 2.0) and then adds that sum to 1e16.
In [6]: 1e16+2
Out[6]: 1.0000000000000002e+16
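You can confirm the left-to-right grouping by parenthesizing the expressions explicitly (a quick sanity check, not part of the session above):

```python
# The parser groups a + b + c as (a + b) + c, so these two
# expressions are evaluated identically.
implicit = 0.5 + 0.5 + 0.5 + 0.5 + 1e16
explicit = (((0.5 + 0.5) + 0.5) + 0.5) + 1e16
print(implicit == explicit)  # True; both are 1.0000000000000002e+16

# Reversing the operand order groups the sum as
# (((1e16 + 0.5) + 0.5) + 0.5) + 0.5, and each 0.5 is rounded away.
print((((1e16 + 0.5) + 0.5) + 0.5) + 0.5)  # 1e+16
```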
In [3]: print(ast.dump(ast.parse("1e+16+0.5+0.5+0.5+0.5"), indent=4))
Module(
    body=[
        Expr(
            value=BinOp(
                left=BinOp(
                    left=BinOp(
                        left=BinOp(
                            left=Constant(value=1e+16),
                            op=Add(),
                            right=Constant(value=0.5)),
                        op=Add(),
                        right=Constant(value=0.5)),
                    op=Add(),
                    right=Constant(value=0.5)),
                op=Add(),
                right=Constant(value=0.5)))],
    type_ignores=[])
The other expression adds 0.5 to 1e16 four times. However, this doesn't work, because 0.5 is too small to change 1e16:
In [7]: 1e16+0.5 == 1e16
Out[7]: True
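You can measure exactly how small is "too small" with math.ulp (available since Python 3.9; not part of the session above), which gives the gap between a float and the next representable one:

```python
import math

# The gap between 1e16 and the next representable float is 2.0,
# so adding anything smaller than half that gap rounds back to 1e16.
print(math.ulp(1e16))      # 2.0
print(1e16 + 0.5 == 1e16)  # True: 0.5 is lost in the rounding
```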
This comes down to the nature of floating-point numbers. Look at sys.float_info.dig; it tells you how many decimal digits the float type can reliably handle on your machine:
In [23]: import sys
In [24]: sys.float_info.dig
Out[24]: 15
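To illustrate (a quick demonstration, not part of the session above): a 16-digit integer can already fall between two representable floats, so its last digit is lost in the conversion, while any 15-digit value round-trips exactly:

```python
import sys

# 15 decimal digits are guaranteed to survive a round trip through float.
print(sys.float_info.dig)        # 15 on IEEE-754 double-precision machines

# A 16-digit integer can land between representable floats; the nearest
# floats around 1e16 are 2 apart, so the trailing 1 is rounded away.
print(float(10000000000000001))  # 1e+16
```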
In a case like this, consider using decimal.Decimal numbers:
In [21]: from decimal import Decimal
In [22]: Decimal("1e16")+Decimal("0.5")+Decimal("0.5")+Decimal("0.5")+Decimal("0.5")
Out[22]: Decimal('10000000000000002.0')
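If you need to stay with binary floats, math.fsum (not mentioned above) is another option: it tracks the partial sums exactly and rounds only once at the end, so the result no longer depends on operand order:

```python
import math

values = [1e16, 0.5, 0.5, 0.5, 0.5]

# fsum accumulates exact partial sums, avoiding the intermediate
# rounding that discards each 0.5 in a naive left-to-right sum.
print(math.fsum(values))            # 1.0000000000000002e+16
print(math.fsum(reversed(values)))  # same result in either order
```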
Upvotes: 1