Reputation: 18181
How do I get the decimal precision of a floating point number in Swift? All the existing answers use some form of string conversion, which is inefficient. I'm specifically looking for a method that does not involve string conversion.
This question was based on a misconception. I have realized and corrected my mistake in understanding, thanks to @MartinR and @SimonByrne.
Upvotes: 0
Views: 1527
Reputation: 18181
This is how to do it:
func getPrecision(_ valueFunction: @autoclosure () -> Double) -> Int
{
    let value = valueFunction()
    var tens: Double = 1.0
    var precision: Int = 0
    // Keep scaling by ten until truncating at that many decimal
    // places reproduces the original value exactly.
    while (floor((value * tens) + 0.1) / tens) != value
    {
        tens *= 10
        precision += 1  // the ++ operator was removed in Swift 3
    }
    return precision
}
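A quick sanity check of the approach (the function is restated in current Swift syntax so the snippet compiles on its own; the test values are short decimals, so the loop terminates quickly):

```swift
import Foundation

// Restated from the answer above, updated for Swift 3+ syntax.
func getPrecision(_ valueFunction: @autoclosure () -> Double) -> Int
{
    let value = valueFunction()
    var tens: Double = 1.0
    var precision: Int = 0
    while (floor((value * tens) + 0.1) / tens) != value
    {
        tens *= 10
        precision += 1
    }
    return precision
}

print(getPrecision(2.0))   // 0
print(getPrecision(0.5))   // 1
print(getPrecision(3.14))  // 2
```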
Edit: This answer is wrong, as correctly pointed out by Simon Byrne.
Upvotes: 0
Reputation: 7864
As other commenters have pointed out, there isn't really any such thing as the "decimal precision" of a floating point number: when you write something like x = 0.123, you are really setting x to be the closest double-precision floating point number to 0.123, which is in fact:
0.1229999999999999982236431605997495353221893310546875
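You can see this stored value directly by asking for more digits than the literal supplies (string formatting is used here only to illustrate the point, not as the precision-finding method the question rules out):

```swift
import Foundation

// Printing 0.123 with 55 fractional digits exposes the exact binary
// value the literal rounds to, rather than the "0.123" you typed.
let x = 0.123
print(String(format: "%.55f", x))  // 0.122999999999999998...
```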
Based on your proposed answer, your question would be more precisely stated as computing the minimum number of decimal places of a decimal approximation that converts back to the original floating point number.
For this your code should be correct for "reasonable" values, though if you have more than 22 decimal places you might see some errors, as the tens variable will no longer be exact (10^23 cannot be exactly represented by a double-precision float). For more details on this you should read up on binary-to-decimal conversion: I recommend taking a look at Rick Regan's webpage:
http://www.exploringbinary.com/tag/convert-to-decimal/
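A concrete way to see where exactness ends, using the standard Double.ulp property: near 1e23 the spacing between adjacent representable doubles is already 2^24 = 16,777,216, and since 10^23 = 2^23 * 5^23 contains only twenty-three factors of two, it falls between two representable values:

```swift
// Powers of ten are exact doubles only up to 1e22. The ulp is the
// gap between a value and the next representable double; an integer
// in this range is representable only if it is a multiple of the ulp.
print((1e22).ulp)  // 2097152.0  -- 10^22 is a multiple of this, so it is exact
print((1e23).ulp)  // 16777216.0 -- 10^23 is not a multiple of this
```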
Upvotes: 2