Kozyarchuk

Reputation: 21877

Python float to Decimal conversion

Python's Decimal doesn't support being constructed from a float; you have to convert the float to a string first.

This is very inconvenient since standard string formatters for float require that you specify the number of decimal places rather than significant places. So if you have a number that could have as many as 15 decimal places you need to format as Decimal("%.15f" % my_float), which will give you garbage at the 15th decimal place if you also have any significant digits before the decimal point (Decimal("%.15f" % 100000.3) == Decimal('100000.300000000002910')).

Can someone suggest a good way to convert from float to Decimal, preserving the value as the user has entered it, perhaps limiting the number of significant digits that can be supported?

Upvotes: 99

Views: 169242

Answers (12)

Ryabchenko Alexander

Reputation: 12470

You can convert and then quantize to keep 5 digits after the decimal point via

Decimal(1.89977787898).quantize(Decimal("1.00000"))
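Run as-is (Python 2.7+ / 3.x, where Decimal accepts a float directly), the call rounds with the context's default ROUND_HALF_EVEN mode:

```python
from decimal import Decimal

# Quantize to 5 digits after the decimal point; the default
# rounding mode is ROUND_HALF_EVEN.
result = Decimal(1.89977787898).quantize(Decimal("1.00000"))
print(result)  # 1.89978
```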

Upvotes: 9

ChristophK

Reputation: 793

I've come across the same problem / question today and I'm not completely satisfied with any of the answers given so far. The core of the question seems to be:

Can someone suggest a good way to convert from float to Decimal [...] perhaps limiting number of significant digits that can be supported?

Short answer / solution: Yes.

def ftod(val, prec = 15):
    return Decimal(val).quantize(Decimal(10)**-prec)

Long Answer:

As nosklo pointed out, it is not possible to preserve the user's input after it has been converted to float. It is possible, though, to round that value with a reasonable precision and convert it into Decimal.

In my case I only need 2 to 4 digits after the separator, but they need to be accurate. Let's consider the classic 0.1 + 0.2 == 0.3 check.

>>> 0.1 + 0.2 == 0.3
False

Now let's do this with conversion to decimal (complete example):

>>> from decimal import Decimal
>>> def ftod(val, prec = 15):   # float to Decimal
...     return Decimal(val).quantize(Decimal(10)**-prec)
... 
>>> ftod(0.1) + ftod(0.2) == ftod(0.3)
True

The answer by Ryabchenko Alexander was really helpful for me. It only lacks a way to dynamically set the precision – a feature I want (and maybe also need). The Decimal documentation FAQ gives an example of how to construct the required argument for quantize():

>>> Decimal(10)**-4
Decimal('0.0001')

Here's what the numbers look like printed with 18 digits after the separator (coming from C programming, I like the fancy Python expressions):

>>> for x in [0.1, 0.2, 0.3, ftod(0.1), ftod(0.2), ftod(0.3)]:
...     print("{:8} {:.18f}".format(type(x).__name__+":", x))
... 
float:   0.100000000000000006
float:   0.200000000000000011
float:   0.299999999999999989
Decimal: 0.100000000000000000
Decimal: 0.200000000000000000
Decimal: 0.300000000000000000

And finally I want to know for which precision the comparison still works:

>>> for p in [15, 16, 17]:
...     print("Rounding precision: {}. Check  0.1 + 0.2 == 0.3  is {}".format(p,
...         ftod(0.1, p) + ftod(0.2, p) == ftod(0.3, p)))
... 
Rounding precision: 15. Check  0.1 + 0.2 == 0.3  is True
Rounding precision: 16. Check  0.1 + 0.2 == 0.3  is True
Rounding precision: 17. Check  0.1 + 0.2 == 0.3  is False

15 seems to be a good default for maximum precision. That should work on most systems. If you need more info, try:

>>> import sys
>>> sys.float_info
sys.float_info(max=1.7976931348623157e+308, max_exp=1024, max_10_exp=308, min=2.2250738585072014e-308, min_exp=-1021, min_10_exp=-307, dig=15, mant_dig=53, epsilon=2.220446049250313e-16, radix=2, rounds=1)

With float having 53 bits mantissa on my system, I calculated the number of decimal digits:

>>> import math
>>> math.log10(2**53)
15.954589770191003

Which tells me that with 53 bits we get almost 16 digits. So 15 is fine for the precision value and should always work. 16 is error-prone and 17 definitely causes trouble (as seen above).

Anyway ... in my specific case I only need 2 to 4 digits of precision, but as a perfectionist I enjoyed investigating this :-)

Any suggestions / improvements / complaints are welcome.

Upvotes: 7

mmj

Reputation: 5780

Inspired by this answer, I found a workaround that lets you shorten the construction of a Decimal from a float, bypassing (only apparently) the string step:

import decimal

class DecimalBuilder(float):
    def __or__(self, a):
        return decimal.Decimal(str(a))

>>> d = DecimalBuilder()
>>> x = d|0.1
>>> y = d|0.2
>>> x + y # works as desired
Decimal('0.3')
>>> d|0.1 + d|0.2 # does not work as desired, needs parenthesis
TypeError: unsupported operand type(s) for |: 'decimal.Decimal' and 'float'
>>> (d|0.1) + (d|0.2) # works as desired
Decimal('0.3')

It's a workaround, but it surely saves code typing and it's very readable.

Upvotes: -1

Chris

Reputation: 6392

The main answer is slightly misleading. The g format ignores any leading zeroes after the decimal point, so format(0.012345, ".2g") returns 0.012 - three decimal places. If you need a hard limit on the number of decimal places, use the f formatter: format(0.012345, ".2f") == 0.01
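A quick sketch of that difference:

```python
# .2g keeps 2 significant digits; leading zeros after the decimal
# point don't count, so three decimal places survive:
print(format(0.012345, ".2g"))  # 0.012

# .2f is a hard limit of 2 digits after the decimal point:
print(format(0.012345, ".2f"))  # 0.01
```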

Upvotes: 2

Deep Patel

Reputation: 739

You can use JSON to accomplish this:

import json
from decimal import Decimal

float_value = 123456.2365
decimal_value = json.loads(json.dumps(float_value), parse_float=Decimal)
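One nice property of this route is that it converts every float in a nested structure in a single pass, because parse_float hands the raw JSON text of each number (the shortest repr) straight to Decimal. A sketch with hypothetical data:

```python
import json
from decimal import Decimal

data = {"price": 19.99, "quantities": [0.1, 0.2]}  # hypothetical input
# parse_float receives the unparsed text of each number, so Decimal
# sees "0.1", not the binary float value:
parsed = json.loads(json.dumps(data), parse_float=Decimal)
print(parsed["price"])       # 19.99
print(parsed["quantities"])  # [Decimal('0.1'), Decimal('0.2')]
```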

Upvotes: 1

jfs

Reputation: 414905

Python <2.7

"%.15g" % f

Or in Python 3.0:

format(f, ".15g")

Python 2.7+, 3.2+

Just pass the float to Decimal constructor directly, like this:

from decimal import Decimal
Decimal(f)
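Note that the two routes give different results: ".15g" rounds to 15 significant digits first, while direct construction preserves the float's exact binary value, including the garbage digits the question complains about. A sketch of the contrast, using the question's example value:

```python
from decimal import Decimal

f = 100000.3

# Round to 15 significant digits, then construct:
print(Decimal(format(f, ".15g")))  # 100000.3

# Direct construction (Python 2.7+ / 3.2+) keeps the exact binary value:
print(Decimal(f))  # 100000.300000000002910383045673370361328125
```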

Upvotes: 85

vincent wen

Reputation: 483

I suggest this

>>> a = 2.111111
>>> a
2.1111110000000002
>>> str(a)
'2.111111'
>>> decimal.Decimal(str(a))
Decimal('2.111111')

Upvotes: 40

muhuk

Reputation: 16095

Python does support Decimal creation from a float; you just cast it to a string first. And the precision loss doesn't occur with the string conversion – the float you are converting didn't have that kind of precision in the first place (otherwise you wouldn't need Decimal).

I think the confusion here is that we can create float literals in decimal format, but as soon as the interpreter consumes that literal the inner representation becomes a floating point number.
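To make that concrete (Python 3, where str() gives the shortest round-tripping form):

```python
from decimal import Decimal

a = 0.1
# str() yields the shortest decimal text that parses back to the same
# float -- for typical inputs, exactly what the user typed:
print(Decimal(str(a)))     # 0.1
# The float itself never held more precision than that text:
print(a == float(str(a)))  # True
```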

Upvotes: 5

Doug Currie

Reputation: 41220

The "right" way to do this was documented in 1990, in Steele and White's and Clinger's PLDI 1990 papers.

You might also look at this SO discussion about Python Decimal, including my suggestion to try using something like frap to rationalize a float.

Upvotes: 1

Paul Fisher

Reputation: 9666

When you say "preserving value as the user has entered", why not just store the user-entered value as a string, and pass that to the Decimal constructor?
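A minimal sketch of that approach; user_input stands in for whatever raw text your UI hands you:

```python
from decimal import Decimal

user_input = "100000.3"  # hypothetical: the text the user typed, never converted to float
value = Decimal(user_input)
print(value)  # 100000.3
```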

Upvotes: 2

nosklo

Reputation: 223172

You said in your question:

Can someone suggest a good way to convert from float to Decimal preserving value as the user has entered

But every time the user enters a value, it is entered as a string, not as a float. You are converting it to a float somewhere. Convert it to a Decimal directly instead and no precision will be lost.

Upvotes: 33

Federico A. Ramponi

Reputation: 47105

The "official" string representation of a float is given by the repr() built-in:

>>> repr(1.5)
'1.5'
>>> repr(12345.678901234567890123456789)
'12345.678901234567'

You can use repr() instead of a formatted string; the result won't contain any unnecessary garbage.
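Combined with the Decimal constructor (on Python 2.7+ / 3.1+, where repr() is the shortest round-tripping form), that looks like:

```python
from decimal import Decimal

f = 100000.3
# repr() gives the shortest string that round-trips to the same float,
# so no trailing garbage digits appear:
print(Decimal(repr(f)))  # 100000.3
```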

Upvotes: 5
