Reputation: 452
I am interacting with an API that returns floats. I am trying to calculate the number of decimal places with which the API created these floats.
For example:
# API returns the following floats.
>> 0.0194360600000000015297185740.....
>> 0.0193793800000000016048318230.....
>> 0.0193793699999999999294963970.....
# Quite clearly these are supposed to represent:
>> 0.01943606
>> 0.01937938
>> 0.01937937
# And are therefore ACTUALLY accurate to only 8 decimal places.
How can I identify that the floats are actually accurate to 8 decimal places? Once I do that, I can initialize a decimal.Decimal instance with the "true" values rather than the inaccurate floats.
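To make the goal concrete, here is a rough before/after sketch (the variable names are mine, just for illustration):
from decimal import Decimal

raw = 0.0194360600000000015297185740   # float as delivered by the API

# What I get today: the Decimal inherits the float's binary noise.
noisy = Decimal(raw)

# What I want, once I know the value is good to 8 decimal places here:
clean = Decimal("0.01943606")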
Edit: The number of accurate decimal places returned by the API varies and is not always 8!
Upvotes: 2
Views: 190
Reputation: 5039
If you are using Python 2.7 or Python 3.1+, consider using the repr() builtin.
Here's how it works with your examples in a Python 3.6 interpreter.
>>> repr(0.0194360600000000015297185740)
'0.01943606'
>>> repr(0.0193793800000000016048318230)
'0.01937938'
>>> repr(0.0193793699999999999294963970)
'0.01937937'
This works because repr() shows a float n with the minimum precision that still satisfies float(repr(n)) == n.
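You can check that round-trip property directly in the interpreter, using one of your own values:
>>> f = 0.0193793800000000016048318230
>>> float(repr(f)) == f
True
>>> float('0.0193793') == f  # one digit fewer no longer round-trips
False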
Given the string representation returned by repr(), you can count the number of digits to the right of the decimal point.
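A minimal sketch of that counting step (decimal_places is a name I made up, and it assumes repr() does not fall back to scientific notation, as it does for very small or very large floats):
>>> from decimal import Decimal
>>> def decimal_places(f):
...     s = repr(f)                       # shortest round-tripping string
...     return len(s.partition('.')[2])   # digits after the decimal point
...
>>> decimal_places(0.0194360600000000015297185740)
8
>>> Decimal(repr(0.0193793699999999999294963970))
Decimal('0.01937937')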
Upvotes: 6