Reputation: 123
from decimal import *
getcontext().prec = 8
print(getcontext(),"\n")
x_amount = Decimal(0.025)
y_amount = Decimal(0.005)
test3 = x_amount - y_amount
print("test3",test3)
Output:
Context(prec=8, rounding=ROUND_HALF_EVEN, Emin=-999999, Emax=999999, capitals=1, clamp=0, flags=[], traps=[InvalidOperation, DivisionByZero, Overflow])

test3 0.020000000
Why does the value of test3 come out with 9 decimal places when the precision is set to 8, as in the example mentioned here?
And it changes to 3 decimal places if I replace the x_amount and y_amount assignments in the above code with:
x_amount = Decimal('0.025')
y_amount = Decimal('0.005')
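For reference, the last line of the output then becomes:
test3 0.020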
I am using decimals in a financial application, and I find the conversions, operations, definitions, precision, etc. very confusing. Is there a link I can refer to for the details of using decimals in Python?
Upvotes: 1
Views: 1841
Reputation: 155516
Your result is correct. prec says how many digits to keep, starting from the most significant non-zero digit. So in your result:
test3 0.020000000
         ^^^^^^^^
the digits pointed to by the carets are the expected eight digits covered by the precision.
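As an illustrative aside (not from your code; it just shows how prec counts digits), leading zeros never count toward the precision:

from decimal import Decimal, getcontext

getcontext().prec = 8
print(Decimal(1) / Decimal(3))    # 0.33333333   -> exactly 8 significant digits
print(Decimal(1) / Decimal(300))  # 0.0033333333 -> still 8; the leading zeros are free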
The reason you get all of them is that using float to initialize Decimal is always wrong. Never initialize a Decimal from a float; it just reproduces the inaccuracy of the float in the Decimal (try printing x_amount and y_amount to see the garbage). Passing a str is the only safe way to do this.
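To see that for yourself, a quick sketch (the digit strings in the comments are the floats' exact binary values, truncated here):

from decimal import Decimal

print(Decimal(0.025))    # 0.025000000000000001387...  (binary float noise, captured exactly)
print(Decimal(0.005))    # 0.005000000000000000104...  (same problem)
print(Decimal('0.025'))  # 0.025  (a string is taken at face value)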
When you do this with str, the individual Decimals "know" the last digit of meaningful precision they possess, and here that only goes to the third decimal place, so the result doesn't include precision beyond that point. If you want prec capping to kick in, try initializing one of the arguments with more precision than prec, and the result will be rounded to prec.
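A minimal sketch of that capping (the extra trailing 1 below is made up purely to push past prec):

from decimal import Decimal, getcontext

getcontext().prec = 8

# Both operands short and exact: the exact difference fits, so nothing is rounded.
print(Decimal('0.025') - Decimal('0.005'))           # 0.020

# One operand now carries more significant digits than prec: the result is
# rounded to 8 significant digits.
print(Decimal('0.025000000001') - Decimal('0.005'))  # 0.020000000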
Upvotes: 2