Reputation: 13
>>> def my_max(x, y):
...     return (x + y + abs(x - y)) / 2
>>> my_max(-894,2.3)
2.2999999999999545
>>> my_max(34,77)
77.0
>>> my_max(0.1,0.01)
0.1
>>> my_max(-0.1 , 0.01)
0.009999999999999995
I am just playing around with Python, and I made this function that sometimes works and other times only gets close to the answer.
I know it has to do with floating-point errors, but why does it work for some inputs and not for others?
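The short version is that some inputs (and intermediate results) happen to be exactly representable in binary floating point while others are not. As a quick probe (a sketch using the standard `decimal` module), `Decimal(float)` prints the exact binary value Python actually stores for each literal:

```python
from decimal import Decimal

# Decimal(float) shows the exact binary double behind each literal, so
# you can see which inputs are stored exactly and which are not.
for value in (2.3, 77.0, 0.1, 0.01):
    print(value, "->", Decimal(value))
```

Values like `77.0` come out exact, while `2.3` and `0.1` are stored as the nearest binary approximation, which is where the small errors enter.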
Upvotes: 0
Views: 59
Reputation: 8025
It is easier to test this out when you separate the function into steps:
def m(x, y):
    first = x + y
    second = abs(x - y)
    third = first + second
    fourth = third / 2
    print("x+y\t\t\t", first)
    print("abs(x-y)\t\t", second)
    print("x+y + abs(x-y)\t\t", third)
    print("(x+y + abs(x-y))/2\t", fourth)
m(-894, 2.3)
You receive the following outputs:
x+y -891.7
abs(x-y) 896.3
x+y + abs(x-y) 4.599999999999909
(x+y + abs(x-y))/2 2.2999999999999545
Now, looking at x+y + abs(x-y), we have the following:
var = -891.7 + 896.3
print(var)
Which outputs:
4.599999999999909
This should, of course, be 4.6, but what is happening is explained in Python's documentation on floating-point arithmetic:
Note that this is in the very nature of binary floating-point: this is not a bug in Python, and it is not a bug in your code either. You’ll see the same kind of thing in all languages that support your hardware’s floating-point arithmetic (although some languages may not display the difference by default, or in all output modes).
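To see this concretely (again a sketch using the standard `decimal` module): the float addition itself is correctly rounded; the error is already baked into the operands, because -891.7 and 896.3 are stored as the nearest binary doubles:

```python
from decimal import Decimal

# Decimal(float) reveals the exact binary double behind each literal;
# neither -891.7 nor 896.3 is representable exactly.
print(Decimal(-891.7))
print(Decimal(896.3))

# Adding those stored values lands on the same number the float
# addition reported, confirming the addition is not at fault.
print(Decimal(-891.7) + Decimal(896.3))
```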
You can resolve this by using the decimal module that comes with Python:
from decimal import Decimal, getcontext

getcontext().prec = 10
var = Decimal(-891.7) + Decimal(896.3)
print(var)
outputs:
4.600000000
In this case, the precision can be as large as 13 and the sum will still round to a variation of 4.6; increase it to 14 or larger and you will once again see your 4.59....
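Two further options worth knowing (a sketch, not part of the answer above): constructing Decimal from strings sidesteps the binary conversion entirely, so no precision tuning is needed; and for simply picking the larger of two numbers, the built-in max() compares rather than computes, so it introduces no rounding of its own:

```python
from decimal import Decimal

# Strings are converted to decimal exactly, so the binary float error
# never enters the calculation.
print(Decimal("-891.7") + Decimal("896.3"))   # 4.6

# The built-in max() only compares its arguments and returns one of
# them unchanged, so there is no arithmetic to go wrong.
print(max(-894, 2.3))   # 2.3
```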
Upvotes: 1