Reputation: 147
I have just started Python tutorials. Why does
print int(100*(11.20 - 11))
print 19 instead of 20 in Python? There is some sort of rounding done by int(), but I am unable to see where it comes from. Help is appreciated.
Upvotes: 2
Views: 213
Reputation: 22841
In Python, int() truncates toward 0, for reasons Guido explains here. This means negative numbers will actually see the opposite effect (see this related SO question & answer).
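For illustration (Python 2 syntax, to match the question), the truncation looks like this:
# int() drops the fractional part, always moving toward zero
print int(19.9)     # 19, not 20
print int(-19.9)    # -19, not -20: the "opposite effect" for negatives
# if conventional rounding is what you want, round() before converting
print int(round(100*(11.20 - 11)))    # 20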
Upvotes: 1
Reputation: 80325
This happens because 11.20 is not exactly 1120/100 but slightly less than that, so 11.20 - 11 ends up slightly less than 20/100 (*). Multiplied by 100, 11.20 - 11 produces a floating-point number slightly below 20, and since int(...) truncates (rounds towards zero), the result is 19.
For efficiency reasons, floating-point numbers are represented as fractions with a power of two as the denominator. So 1/2, 1/4, 1/16, … can be represented exactly as floating-point numbers, but 1/5 cannot (there is no power of two that can be used as the denominator to represent 1/5 exactly).
(*) This floating-point subtraction happens to be exact, so that the absolute error of the result compared to 20/100 is exactly the same as the absolute error of 11.20 compared to 1120/100. Note that the relative error is magnified, because the same absolute error is applied to a much smaller value. The subtraction of values that are already inexact and are close is called “catastrophic cancellation”, and the subtraction in your Python line is one such catastrophic cancellation.
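Not something the explanation above depends on, but if you want to see the exact fraction Python actually stores, the fractions module can show it (assuming Python 2.7+, where Fraction accepts a float directly); the denominator is always a power of two:
from fractions import Fraction

# exact binary value stored for the literal 11.20 -- note the power-of-two denominator
print Fraction(11.20)
# exact value of the subtraction: slightly less than 1/5
print Fraction(11.20 - 11)
# the true 1/5 for comparison -- its denominator is not a power of two
print Fraction(1, 5)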
Upvotes: 7
Reputation: 39406
As others have said, it is a precision issue with floating-point numbers. You can see it with:
>>> print '%.20f' % (100*(11.20 - 11))
19.99999999999992894573
Upvotes: 0
Reputation: 572
11.2 - 11
actually yields
0.1999999999999993
so 100 * (11.2 - 11) is just under 20, and int() truncates it down to 19.
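If it helps, here is the same chain step by step (Python 2.7+, whose repr shows the shortest round-tripping decimal; the exact digits may differ slightly on other versions):
print repr(11.2 - 11)           # something like 0.1999999999999993
print repr(100 * (11.2 - 11))   # just under 20
print int(100 * (11.2 - 11))    # 19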
Upvotes: 4