archit

Reputation: 181

Precision loss when multiplying Decimal in Python

When trying to multiply two decimal objects in python I am experiencing precision loss. How can I fix this?

    from decimal import Decimal

    u = Decimal("1.4142135623730950488016887242096980785696718753769480731766797379907324784621070388503875343276415727350138462309122970249248360558507372126441214970999358314132")
    y = Decimal(10**100)
    z = u*y
    str(int(z))
=> '14142135623730950488016887240000000000000000000000000000000000000000000000000000000000000000000000000'

Upvotes: 1

Views: 1408

Answers (2)

Bert Peters

Reputation: 1542

Decimal is not arbitrary-precision the way int is; instead it has a fixed but changeable number of significant digits of precision. The default is 28, which is exactly where your number is being cut off.

You can change this with decimal.getcontext().prec = 128, or whatever precision you need. For more information, see the decimal module documentation.
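As a sketch of that fix (the value 200 is an arbitrary choice, just comfortably larger than the 160 digits in the constant), raising the context precision before the multiplication keeps all the digits:

```python
from decimal import Decimal, getcontext

# Raise the working precision from the default 28 significant
# digits to 200, more than the input constant actually has.
getcontext().prec = 200

u = Decimal("1.4142135623730950488016887242096980785696718753769480731766797379907324784621070388503875343276415727350138462309122970249248360558507372126441214970999358314132")
y = Decimal(10**100)
z = u * y

# The integer part now carries all 101 digits instead of
# 28 digits followed by zeros.
print(str(int(z)))
```

Note that the precision must be raised before the multiplication happens; changing it afterwards does not restore digits that were already rounded away.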

Upvotes: 7

sg.sysel

Reputation: 163

"Unlike hardware based binary floating point, the decimal module has a user alterable precision (defaulting to 28 places) which can be as large as needed for a given problem"

You could change that by calling decimal.getcontext().prec = 256 (or any other value that fits your purpose).
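If the higher precision is only needed for one computation, decimal.localcontext (also part of the standard decimal module) scopes the change so the global default is untouched afterwards; the 256 below just mirrors the value above:

```python
from decimal import Decimal, getcontext, localcontext

u = Decimal("1.4142135623730950488016887242096980785696718753769480731766797379907324784621070388503875343276415727350138462309122970249248360558507372126441214970999358314132")

with localcontext() as ctx:
    ctx.prec = 256  # applies only inside this with-block
    z = u * Decimal(10**100)

# Outside the block the previous precision is back in effect,
# but z keeps the digits it was computed with.
print(str(int(z)))
```

This avoids surprising other code in the same process that relies on the default 28-digit context.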

Upvotes: 1
