Reputation: 16142
I'm working on a math program and I have a big problem with rounding. After my program does some math, it rounds the result. Everything works fine, but if the result is 2.49999999999999992, the round function returns 3.0 instead of 2.0.
How can I fix that?
Thanks.
Upvotes: 3
Views: 250
Reputation: 16325
As @Pavel Anossov says in his comment, there's no such thing as 2.49999999999999992 in IEEE 754: 2.49999999999999992 == 2.5.
Floats will always be critical for your calculations, because in any case (32/64/128-bit float) you have a precision limit. This obviously also applies to Python floats.
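You can see this directly in the interpreter; the following minimal check shows the literal from the question collapsing to 2.5 before round ever sees it:

```python
# The decimal literal cannot be stored exactly in a 64-bit double; the
# parser rounds it to the nearest representable value, which is exactly 2.5.
x = 2.49999999999999992
print(x == 2.5)   # True: both literals denote the same double
```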
There are different options to deal with that. You could, for example, use the decimal library:
>>> from decimal import *
>>> getcontext().prec = 6
>>> Decimal(1) / Decimal(7)
Decimal('0.142857')
>>> getcontext().prec = 28
>>> Decimal(1) / Decimal(7)
Decimal('0.1428571428571428571428571429')
It's possible to set the precision yourself in that case. decimal is in the standard library.
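Applied to the number from the question, decimal keeps every digit exactly as typed, so half-up rounding gives 2 as expected (a sketch using only the stdlib decimal module):

```python
from decimal import Decimal, ROUND_HALF_UP

# Constructing from a string preserves all the digits exactly;
# constructing from a float would inherit the float's rounding.
d = Decimal("2.49999999999999992")
print(d.to_integral_value(rounding=ROUND_HALF_UP))  # 2
```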
There are also third-party libraries like bigfloat that you could use (I have no experience with it):
>>> from bigfloat import *
>>> sqrt(2, precision(100)) # compute sqrt(2) with 100 bits of precision
But as you can see, you always have to choose a precision. If you really don't want to lose any kind of precision, use fractions (also in the standard library):
>>> from fractions import Fraction
>>> a = Fraction(16, -10)
>>> a
Fraction(-8, 5)
>>> a / 23
Fraction(-8, 115)
>>> float(a/23)
-0.06956521739130435
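Fraction also accepts a decimal string, so the question's value can be stored exactly as a ratio of integers and rounded without any precision loss (a sketch, again using only the stdlib):

```python
from fractions import Fraction

# Fraction parses the decimal string into an exact ratio of integers,
# so this value really is strictly below 2.5.
f = Fraction("2.49999999999999992")
print(f < Fraction(5, 2))   # True
print(round(f))             # 2, computed with exact rational arithmetic
```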
Upvotes: 5
Reputation: 15887
The reason is that Python's float type (typically an IEEE 754 double-precision floating point number) does not have such a value as 2.49999999999999992. Floating point numbers are generally of the form mantissa*base**exponent, and in Python you can find the limits for float in particular in sys.float_info. For starters, let's calculate how many digits the mantissa itself can hold:
>>> from sys import float_info
>>> print float_info.radix**float_info.mant_dig # How big can the mantissa get?
9007199254740992
>>> print "2.49999999999999992"
2.49999999999999992
>>> 2.49999999999999992
2.5
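We can also ask for the exact value the literal was stored as, to confirm it really is 2.5 (a quick check using the stdlib decimal module and float.hex):

```python
from decimal import Decimal

# Decimal(float) shows the exact binary value the parser rounded to,
# and float.hex shows the same value in base-2 scientific notation.
print(Decimal(2.49999999999999992))    # 2.5
print(float.hex(2.49999999999999992))  # 0x1.4000000000000p+1, i.e. 1.25 * 2**1
```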
Clearly the number we've entered has more digits than that. Just how close to 2.5 can we go before things go wrong?
>>> print 2.5*float_info.epsilon
5.55111512313e-16
e-16 here means *10**-16, so let's reformat that for comparison:
>>> print "%.17f"%(2.5*float_info.epsilon); print "2.49999999999999992"
0.00000000000000056
2.49999999999999992
This indicates that at a magnitude around 2.5, differences smaller than about 5.6e-16 (including our 8e-17) are lost to the storage itself. Therefore this value is exactly 2.5, which rounds up.
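A small experiment confirms this: subtracting 8e-17 from 2.5 changes nothing, while subtracting a full 2.5*epsilon lands on a distinct neighbouring double.

```python
import sys

# Any difference below the local spacing of doubles vanishes on storage.
print(2.5 - 8e-17 == 2.5)                  # True: 8e-17 is below the gap
print(2.5 - 2.5 * sys.float_info.epsilon)  # a strictly smaller double
```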
We can also calculate an estimate of how many significant digits we can use:
>>> import math, sys
>>> print math.log10(sys.float_info.radix**sys.float_info.mant_dig)
15.9545897702
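sys.float_info exposes this same bound directly: float_info.dig is the number of decimal digits guaranteed to survive a round-trip through a double.

```python
import sys

# The floor of our ~15.95 estimate: 15 decimal digits always round-trip.
print(sys.float_info.dig)  # 15
```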
Very nearly, but not quite, 16. In binary the first digit is always 1, so we can have a known number of significant digits (mant_dig), but in decimal the first digit will consume between one and four bits. This means the last digit may be off by more than one. Usually we hide this by printing only with a limited precision, but it actually affects lots of numbers:
>>> print '%f = %.17f'%(1.1, 1.1)
1.100000 = 1.10000000000000009
>>> print '%f = %.17f'%(10.1, 10.1)
10.100000 = 10.09999999999999964
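Conversely, 17 significant digits are always enough to recover the exact double (a property of IEEE 754 binary64, which is why 17 places were used in the examples above):

```python
# Printing a double with 17 significant digits round-trips exactly.
s = "%.17g" % 10.1
print(float(s) == 10.1)  # True
```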
Such is the inherent imprecision of floating point numbers. Types like bigfloat, decimal and fractions (thanks to David Halter for these examples) can push the limits around, but if you start looking at many digits you need to be aware of them. Also note that this is not unique to computers; an irrational number, such as pi or sqrt(2), cannot be written exactly in any integer base.
Upvotes: 2