Reputation: 1456
I get these strange results:
In [46]: Decimal(1.1).quantize(Decimal('.1'), rounding=ROUND_UP)
Out[46]: Decimal('1.2')
In [47]: Decimal(1.1).quantize(Decimal('.1'), rounding=ROUND_HALF_UP)
Out[47]: Decimal('1.1')
In [48]: Decimal(3.65).quantize(Decimal('.1'), rounding=ROUND_UP)
Out[48]: Decimal('3.7')
In [49]: Decimal(3.65).quantize(Decimal('.1'), rounding=ROUND_HALF_UP)
Out[49]: Decimal('3.6')
But I want this:
In [47]: Decimal(1.1).quantize(Decimal('.1'), rounding=Something)
Out[47]: Decimal('1.1')
In [48]: Decimal(3.65).quantize(Decimal('.1'), rounding=Something)
Out[48]: Decimal('3.7')
In other words, I want to round to the closest value.
Upvotes: 1
Views: 41
Reputation: 164693
The problem is that you are feeding float values to Decimal, which are subject to floating point precision errors. Feeding a string removes the problem:
print(Decimal(str(1.1)).quantize(Decimal('.1'), rounding=ROUND_UP))
Decimal('1.1')
print(Decimal(str(3.65)).quantize(Decimal('.1'), rounding=ROUND_UP))
Decimal('3.7')
Specifically, we find:
Decimal(1.1) == Decimal('1.100000000000000088817841970012523233890533447265625')
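Putting it together, converting through str() and using ROUND_HALF_UP gives the "closest value" behavior asked for. A minimal sketch (the helper name round_closest is just for illustration):

```python
from decimal import Decimal, ROUND_HALF_UP

def round_closest(x, places='.1'):
    # str(x) gives the shortest decimal literal for the float,
    # so Decimal holds exactly the digits written, not the
    # nearest binary approximation.
    return Decimal(str(x)).quantize(Decimal(places), rounding=ROUND_HALF_UP)

print(round_closest(1.1))   # Decimal('1.1')
print(round_closest(3.65))  # Decimal('3.7')
```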
Upvotes: 2