Reputation: 21230
What I'm trying to accomplish is writing a function that returns a Decimal representation of an input, if it can be converted to a Decimal and if it has any level of precision up to but not exceeding a particular number of digits.
Suppose I have the following:
from decimal import Decimal

def check(input):
    d = Decimal(input).quantize(Decimal('0.01'))
    raw = Decimal(input)
    return d if d.compare(raw) == 0 else "FAIL"
print(check(7.123)) # Returns 'FAIL'
print(check('7.123')) # Returns 'FAIL'
print(check(7.12)) # Returns 'FAIL'
print(check('7.12')) # Returns Decimal('7.12')
As you can see, if a string is passed in, it fails if there is any truncation of the value, and returns the Decimal representation if there is no truncation. On the other hand, floats always fail.
Is there a way to fix this check to see if a quantized Decimal value was truncated? Or is there simply some more direct way that I'm missing?
Upvotes: 0
Views: 138
Reputation: 90899
The issue occurs because when you convert a float instance into Decimal, it converts the exact internal binary representation of the float. Example -
>>> Decimal(7.12)
Decimal('7.12000000000000010658141036401502788066864013671875')
From documentation -
If value is a float, the binary floating point value is losslessly converted to its exact decimal equivalent. This conversion can often require 53 or more digits of precision. For example, Decimal(float('1.1')) converts to Decimal('1.100000000000000088817841970012523233890533447265625').
One way to fix this would be to convert the float to str before passing it into Decimal. Example -
>>> Decimal(str(7.12))
Decimal('7.12')
Example/Demo -
>>> def check(input):
... d = Decimal(str(input)).quantize(Decimal('0.01'))
... raw = Decimal(str(input))
... return d if d.compare(raw) == 0 else "FAIL"
...
>>> print(check(7.123))
FAIL
>>> print(check('7.123'))
FAIL
>>> print(check(7.12))
7.12
>>> print(check('7.12'))
7.12
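As a possible alternative (my own sketch, not part of the answer above): instead of quantizing and comparing, you can inspect the exponent of the parsed Decimal via as_tuple(). For a finite Decimal, an exponent of -2 or greater means at most two fractional digits, which answers the "is there a more direct way" part of the question. The helper name check_precision is mine:

```python
from decimal import Decimal

def check_precision(value):
    """Return the value as a Decimal if it has at most two decimal
    places, otherwise the string 'FAIL'.

    Floats are routed through str() so that Python's repr rounding
    (e.g. 7.12) is used rather than the exact binary expansion.
    Assumes finite inputs; NaN/Infinity are not handled here.
    """
    raw = Decimal(str(value))
    # as_tuple().exponent is negative for digits after the decimal
    # point: Decimal('7.12') has exponent -2, Decimal('7.123') has -3.
    if raw.as_tuple().exponent >= -2:
        # Normalize to two places so '7.1' comes back as Decimal('7.10').
        return raw.quantize(Decimal('0.01'))
    return "FAIL"
```

This avoids the compare-after-quantize round trip entirely, at the cost of the same str() caveat as above: a float like 7.12 passes because its repr is short, not because its binary value is exact.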
Upvotes: 2