Sanjeev Penupala

Reputation: 11

Why is the decimal precision different between multiplication and division of the same math expression?

# Used "==" and "is" for equality
print(48 / 10 == 48 * 0.1)
print(48 / 10 is 48 * 0.1)
print()

# Check Both Answers Individually
print(f"48 / 10 = {48 / 10}")
print(f"48 * 0.1 = {48 * 0.1}")

In the code above, I thought both expressions would be equal, but they are not. This is what it outputs:

False
False

48 / 10 = 4.8
48 * 0.1 = 4.800000000000001

Can someone explain why the result differs depending on whether you use division or multiplication?

Upvotes: 0

Views: 512

Answers (1)

Joni

Reputation: 111239

Even though multiplying by 0.1 and dividing by 10 are mathematically equivalent, in a computer program they are not.

When you write 0.1 in Python, what you get is not one tenth. It's slightly more. Here's 0.1 to 40 digits of precision:

>>> "%0.40f" % 0.1
'0.1000000000000000055511151231257827021182'

The difference is due to the fact that floating point numbers use a binary representation. You cannot represent 0.1 exactly with a finite number of binary digits, just like you can't represent 1/3 = 0.333333... exactly with a finite number of decimal digits.
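To see the exact value that actually gets stored, you can convert the float to a Decimal or a Fraction. Here's a small sketch using the standard decimal and fractions modules; the outputs shown are what a typical 64-bit IEEE-754 build of CPython produces:

>>> from decimal import Decimal
>>> Decimal(0.1)   # exact decimal expansion of the binary float nearest to 0.1
Decimal('0.1000000000000000055511151231257827021181583404541015625')
>>> from fractions import Fraction
>>> Fraction(0.1)  # the same value as an exact ratio of integers (denominator is 2**55)
Fraction(3602879701896397, 36028797018963968)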

Since 0.1 is slightly more than one tenth, you should expect 48*0.1 to be slightly more than 4.8. And it is:

>>> "%0.40f" % (48*0.1)
'4.8000000000000007105427357601001858711243'

You shouldn't expect the result of 48 / 10 to be exactly 4.8 either. The float that Python prints as 4.8 is in fact slightly less than 4.8:

>>> "%0.40f" % (4.8)
'4.7999999999999998223643160599749535322189'
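If the goal is just to check that the two results agree up to floating point error, a tolerance-based comparison is the usual approach rather than ==. A minimal sketch using math.isclose from the standard library:

>>> import math
>>> 48 / 10 == 48 * 0.1               # exact comparison: the two floats differ
False
>>> math.isclose(48 / 10, 48 * 0.1)   # within the default relative tolerance of 1e-09
True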

Upvotes: 1
