Adrian Pereira

Reputation: 5

Confusion in working of Decimal and Round function

I am trying to understand how Python's round and Decimal functions actually work. To do this I ran the following code:

from decimal import Decimal

print("An integer %i and a float %.1f" % (23, 6.549))
print(Decimal(6.549))
print(Decimal(45.0) / Decimal(7))
print(Decimal(45.0 / 7))

I got the output:

An integer 23 and a float 6.5
6.54900000000000037658764995285309851169586181640625
6.428571428571428571428571429
6.4285714285714288251938342000357806682586669921875
  1. I tried to understand how the round function works from the documentation here; it explains that 6.549 is converted to a binary floating-point number and replaced by a binary approximation. I am confused about where the approximation in the extended value from Decimal(6.549) comes from. Also, can I assume that rounding off every decimal value will work in the same way, or will it depend on the individual binary approximation?

  2. When I use the Decimal function in two different ways, it gives slightly different values, and one value has more precision than the other. Can someone please explain the reason for this?

Thanks in advance.

Upvotes: 0

Views: 136

Answers (1)

Simon Byrne

Reputation: 7874

I am confused about where the approximation in the extended value from Decimal(6.549) comes from.

The literal 6.549 is first converted to a floating-point number, which is stored in binary: as 6.549 cannot be stored exactly in this format, it is rounded to the nearest representable value, which is in fact

6.54900000000000037658764995285309851169586181640625

(see Rick Regan's floating point converter for more details).
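For example, you can inspect the exact stored value from within Python itself; the sketch below is just an illustration, using only standard-library tools (Decimal, Fraction, and float.hex):

from decimal import Decimal
from fractions import Fraction

x = 6.549  # stored as the nearest IEEE 754 double

# Converting the float exposes the exact value that was actually stored:
print(Decimal(x))    # 6.54900000000000037658764995285309851169586181640625
print(Fraction(x))   # the same value as an exact ratio with a power-of-two denominator
print(x.hex())       # the underlying binary representation, in hexadecimal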

To avoid this intermediate rounding, pass a string instead:

>>> Decimal("6.549")
Decimal('6.549')

Also, can I assume that rounding off every decimal value will work in the same way, or will it depend on the individual binary approximation?

Yes, all numeric literals will have this problem (unless they can be represented exactly in binary, such as 45.0 and 7).
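To illustrate (a small sketch, not exhaustive): values whose fractional part is a sum of negative powers of two convert exactly, while everything else picks up a binary approximation:

from decimal import Decimal

# Exactly representable in binary, so no approximation appears:
print(Decimal(45.0))   # 45
print(Decimal(0.5))    # 0.5
print(Decimal(0.25))   # 0.25

# Not exactly representable, so the binary approximation shows through:
print(Decimal(0.1))    # 0.1000000000000000055511151231257827021181583404541015625
print(Decimal(6.549))  # 6.54900000000000037658764995285309851169586181640625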

When I use the Decimal function in two different ways, it gives slightly different values. Also, one value has more precision than the other. Can someone please explain the reason for this?

Due to the same problem above: Decimal(45.0)/Decimal(7) will first convert the literals 45.0 and 7 to decimals (which can be done exactly), then perform the division using the extended precision decimal library.

Decimal(45.0/7), on the other hand, converts both 45.0 and 7 to floating-point numbers (which is exact), but then performs the division at floating-point precision (which is only accurate to about 16 significant digits). This result is then converted to decimal, including the erroneous extra digits.
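Putting the two routes side by side (a short sketch; the default context precision of 28 significant digits is a detail of the decimal module):

from decimal import Decimal, getcontext

print(getcontext().prec)   # 28 -- the decimal module's default number of significant digits

exact_route = Decimal(45.0) / Decimal(7)   # converted exactly, divided by the decimal library
float_route = Decimal(45.0 / 7)            # divided as floats (~16 digits), then converted

print(exact_route)   # 6.428571428571428571428571429
print(float_route)   # 6.4285714285714288251938342000357806682586669921875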

Upvotes: 1
