Majoris

Reputation: 3189

Square Root inaccurate in decimals python

Why does the square root come out with inaccurate decimal digits? It should be an exact whole number, with all zeros after the decimal point.

print('Enter your input:')
n = input()

def squareroot(n):
    i = 0.01
    while i*i < float(n):
        i += 0.01
    return i

print(squareroot(n))

output -

Enter your input:
100
10.009999999999831

Enter your input:
25
5.009999999999938

Enter your input:
9
3.00999999999998

Upvotes: 0

Views: 129

Answers (2)

blhsing

Reputation: 106553

This is because binary floating-point numbers can only represent a value exactly when it can be written as a finite sum of powers of 2; any other value, such as 0.1, is stored as the nearest representable approximation.

0.1 represented in the standard binary64 format is actually exactly 0.1000000000000000055511151231257827021181583404541015625 in decimal. See Is floating point math broken? for detailed explanations.
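You can see this exact stored value yourself by converting the float to a Decimal, which performs an exact conversion (a minimal sketch):

from decimal import Decimal

# Decimal(float) converts the binary64 value exactly, exposing the
# approximation that 0.1 is actually stored as.
print(Decimal(0.1))
# 0.1000000000000000055511151231257827021181583404541015625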

To avoid the inherent inaccuracies of binary floating-point numbers, you can either use the decimal module or, in your case, simply increment i by a power of 2 instead, such as 0.0078125, which is 2 ** -7.

def squareroot(n):
    i = 0.0078125
    while i * i < float(n):
        i += 0.0078125
    return i

Sample input/output after this change:

Enter your input:
25
5.0
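For completeness, here is a minimal sketch of the decimal-module alternative mentioned above, keeping the original step of 0.01 (the function name and the hard-coded step are illustrative choices):

from decimal import Decimal

def squareroot_decimal(n):
    # Decimal('0.01') is stored exactly, so repeated addition
    # accumulates no binary rounding error.
    step = Decimal('0.01')
    i = step
    while i * i < Decimal(n):
        i += step
    return i

print(squareroot_decimal('25'))  # prints 5.00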

Upvotes: 1

wrvb

Reputation: 251

Your algorithm amounts to "repeatedly increase i by 0.01 until i*i is at least n". If x is the true square root, this will return a number between x and x + 0.01, which is exactly what you're seeing. If you want a more accurate answer, pick a smaller increment or (better yet) use a better algorithm, like binary search or Newton's method. Or, if this isn't a homework assignment, just use math.sqrt.
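For illustration, a minimal sketch of the Newton's method option (the starting guess and the 1e-10 tolerance are arbitrary choices):

def sqrt_newton(n, tol=1e-10):
    # Newton's method: repeatedly replace x with the average of
    # x and n/x, which converges quadratically to sqrt(n).
    x = n if n > 1 else 1.0
    while abs(x * x - n) > tol:
        x = (x + n / x) / 2
    return x

print(sqrt_newton(25))  # 5.0
print(sqrt_newton(9))   # 3.0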

Upvotes: 1
