user3467349

Reputation: 3191

Why must I resort to strings to get accurate number handling

When converting data to decimal, I sometimes still get incorrect results:

>>> from decimal import *
>>> D = Decimal
>>> D(5.2).quantize(D('.00000'), rounding=ROUND_DOWN)
Decimal('5.20000')
>>> D(5.3).quantize(D('.00000'), rounding=ROUND_DOWN)
Decimal('5.29999')

I don't think floating-point imprecision is an excuse here, since I'm using a specialized class to deal with the numbers! Quoted from the Python docs:

Decimal “is based on a floating-point model which was designed with people in mind, and necessarily has a paramount guiding principle – computers must provide an arithmetic that works in the same way as the arithmetic that people learn at school.” – excerpt from the decimal arithmetic specification

This works:

x = round(x - .0000049, 5)   # subtract just under half a unit in the 5th place so round() truncates
D(str(x) + (5 - len(str(x).split('.')[1])) * '0')   # pad with trailing zeros to exactly 5 decimal places

Upvotes: 1

Views: 142

Answers (2)

MariusSiuram

Reputation: 3644

Your assumptions are wrong. The literal 5.3 is already "invalid", in the sense that the float it produces is not exactly 5.3 (as Kevin has already stated).

The problem here is that you cannot start from an invalid representation (a float one) and then assume that magic will happen and fix everything.

It is easy to build a sane environment:

from decimal import Decimal as D

a = D(53)
b = D(10)
c = a / b   # exact decimal division: Decimal('5.3')

And then c is a valid number.

The problem is: where does your decimal number come from? If you have 5.3 and want a valid representation, you are asking for a valid representation of a number that you have as a string, so use the string. If you have a division, take advantage of that and do the operation in the decimal.Decimal environment. If the user has passed a number, keep it as a string and then convert it to Decimal directly, and so on.
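
Here is a minimal sketch of those cases; the variable names are only for illustration:

from decimal import Decimal

# Case 1: the value arrives as text, so feed the string straight to Decimal
price = Decimal('5.3')

# Case 2: the value is the result of a division, so do the division in Decimal
ratio = Decimal(53) / Decimal(10)      # Decimal('5.3')

# Case 3: the value comes from user input; input() already returns a string
amount = Decimal(input('Enter an amount: '))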

Bonus edit: Remember that

>>> 5.2999999999999999 == 5.3
True

That is what it means to say that the "literal representation is wrong" (from your point of view and constraints). Using 5.3 as a float literal implies this problem.
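
You can check in the interpreter that both literals collapse to the same binary64 value, and that this shared value sits just below 5.3:

>>> from decimal import Decimal
>>> Decimal(5.2999999999999999) == Decimal(5.3)
True
>>> str(Decimal(5.3)).startswith('5.2999999999999998')
True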

Upvotes: 1

Kevin

Reputation: 30161

Because 5.3 by itself is already wrong. When you pass it to Decimal(), that won't magically fix it. You need to be using Decimal() at every step from the very beginning. That means writing Decimal('5.3') instead of Decimal(5.3).
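
To see the difference in one place (the float case's digits are abbreviated in the comment):

from decimal import Decimal

Decimal(5.3)     # inherits the float's error: roughly Decimal('5.2999999999999998...')
Decimal('5.3')   # parsed from the string: exactly Decimal('5.3')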

EDIT: OP has stated they are using JSON. Here's how you parse Decimals from JSON:

import json, decimal
decimal_decoder = json.JSONDecoder(parse_float=decimal.Decimal)
parsed_json = decimal_decoder.decode(raw_json)

(where raw_json is the JSON as a string). This will produce correct Decimals.
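
json.loads accepts the same parse_float hook, which makes for a quick check (the JSON string here is only an example):

>>> import json, decimal
>>> json.loads('{"price": 5.3}', parse_float=decimal.Decimal)
{'price': Decimal('5.3')}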

Upvotes: 10
