Reputation: 6187
I have a float with arbitrary precision, and a minimum step size that indicates the minimum amount this number can be increased or decreased by:
num = 3.56891211101
min_step = 0.005
I'd like to have a function that takes this num and the min_step and rounds the num to that given min_step. So in this case the result would be 3.570.
I attempted this:
num = 3.56891211101
min_step = 0.005
def myround(x, base):
    return base * round(x / base)
x = myround(num, min_step)
print(x)
>>> 3.5700000000000003
...it's close, but not quite. I would like the output to be the same as if:
y = 3.570
print(y)
>>> 3.57
What's a simple way of accomplishing this?
I'm on Python 3.8
Upvotes: 0
Views: 1192
Reputation: 148890
Most Python implementations (including the CPython reference implementation) use IEEE 754 binary floating point numbers. As a result, they cannot represent most decimal fractions exactly, which is where the trailing ...0000003 in your output comes from.
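You can see this with the values from the question: the division and round give exactly 714, but the float product already carries the representation noise:

print(round(3.56891211101 / 0.005))  # 714
print(0.005 * 714)                   # 3.5700000000000003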
The canonical way is to use the decimal module:
from decimal import Context

num = 3.56891211101
c = Context(prec=3)  # prec is the number of significant digits
x = c.create_decimal(num)
print(x)
which gives, as expected:
3.57
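Note that prec=3 counts significant digits, not decimal places, so it matches the desired 3.57 here only because the value has one digit before the decimal point; a value like 13.5689 would become 13.6. If the goal is to round to an arbitrary step such as 0.005, a sketch using exact Decimal arithmetic could look like this (round_to_step is my name for it, not a stdlib function):

from decimal import Decimal, ROUND_HALF_UP

def round_to_step(num, step):
    # Go through str() so Decimal sees the intended digits,
    # not the full binary expansion of the float
    d = Decimal(str(num))
    s = Decimal(str(step))
    # Nearest whole number of steps, scaled back up to the original magnitude
    return (d / s).to_integral_value(rounding=ROUND_HALF_UP) * s

print(round_to_step(3.56891211101, 0.005))  # 3.570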
Upvotes: 1
Reputation: 6187
I solved it with:
def myround(x, base):
    # Count the decimal places of base by locating the '.' from the right
    decimal_places = str(base)[::-1].find('.')
    # Round to the nearest multiple of base, then round again
    # to strip the binary-float noise
    precise = base * round(x / base)
    return round(precise, decimal_places)

x = myround(num, min_step)
print(x)
>>> 3.57
y = 3.570
print(y)
>>> 3.57
Hope it's helpful to others.
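One caveat, in case it bites anyone: str(base)[::-1].find('.') assumes str(base) contains a '.', so it misbehaves for integer steps or steps that print in scientific notation (str(0.00001) is '1e-05'). A variant, only a sketch built on the same idea, derives the decimal places from Decimal's exponent instead:

from decimal import Decimal

def myround(x, base):
    # Decimal's exponent is the (negated) number of decimal places,
    # and it handles scientific notation like '1e-05' correctly
    decimal_places = max(0, -Decimal(str(base)).as_tuple().exponent)
    return round(base * round(x / base), decimal_places)

print(myround(3.56891211101, 0.005))  # 3.57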
Upvotes: 0