Rossignol_fr

Reputation: 13

Why do some very close floating-point numbers cause such divergence in this Python code, from version 3.4.x up to at least 3.6.6?

When I execute this code, it sometimes produces an arithmetical divergence with very close float numbers. It happens with numbers of the form 2**n - p/q: some give an acceptable result, while others diverge very fast. I have read some documentation about floating-point arithmetic, but I think the problem lies elsewhere. Where? If anyone has an idea, I will be very happy to understand what is going on...

I have run the code on Python 3.4.5 (32-bit) and also online with repl.it and trinket (https://trinket.io/python3/d3f3655168); the results were similar.

# this code illustrates arithmetical divergence with floating-point numbers
# on Python 3.4 and 3.6.6

def ErrL(r):
    s=1
    L=[]
    for k in range(10):
        s=s*(r+1)-r
        L.append(s)
    return L

print(ErrL(2**11-2/3.0)) # this number generates fast divergence in the for loop
#[0.9999999999997726, 0.9999999995341113, 0.9999990457047261, 0.9980452851802966, -3.003907522359441, -8200.33724163292, -16799071.44994476, -34410100067.30351, -70483354973240.67, -1.4437340543685667e+17]

print(ErrL(2**12-1/3.0)) # this number generates fast divergence in the for loop
#[0.9999999999995453, 0.9999999981369001, 0.9999923674999991, 0.968732191662184, -127.09378815725313, -524756.5521508802, -2149756770.9781055, -8806836909202.637, -3.607867520470422e+16, -1.4780230608860496e+20]

print(ErrL(2**12-1/10.0)) # this number generates fast divergence in the for loop
#[0.9999999999995453, 0.9999999981369001, 0.9999923670652606, 0.9687286296662023, -127.11567712053602, -524876.117595124, -2150369062.0754633, -8809847014512.865, -3.609306223376185e+16, -1.478696666654989e+20]

print(ErrL(2**12-1/9.0)) # no problem here
#[1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0]

print(ErrL(2**12-1/11.0)) # no problem here
#[1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0]

What I expect is obviously a vector of ten ones!
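As a quick check of the very first step (a snippet I added afterwards, assuming standard 64-bit IEEE 754 doubles): comparing (r+1) - r to 1.0 shows that the diverging values are already off at the first iteration, while the well-behaved ones are not.

# quick check: for the diverging values, (r+1) - r is already not exactly 1.0
for r in (2**11 - 2/3.0, 2**12 - 1/3.0, 2**12 - 1/10.0,
          2**12 - 1/9.0, 2**12 - 1/11.0):
    print(r, (r + 1) - r == 1.0)
# the first three print False, the last two print True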

Upvotes: 1

Views: 177

Answers (2)

Serge Ballesta

Reputation: 149165

The reason why the divergence is so fast is mathematical. If you write f(s) = (r+1) * s - r, it becomes evident that the derivative is the constant r+1, so any error in s is multiplied by roughly r+1 at each iteration. According to Wikipedia, 64-bit floats in IEEE 754 representation have a 53-bit mantissa. With r close to 2**11, an error in the last bit needs fewer than 5 steps to grow close to 1, and with numbers close to 2**12 it explodes at the 5th iteration, which is exactly what you obtain.
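For illustration, here is a rough sketch (my own instrumentation, not code from the question) that runs the same recurrence and compares the observed deviation of s from 1 with the crude prediction of an initial 2**-53 error amplified by (r+1) at each step:

# rough sketch: the recurrence s = (r+1)*s - r maps s = 1 + e to 1 + (r+1)*e,
# so an initial rounding error of about 2**-53 grows by a factor of roughly r+1 per step
def err_growth(r, steps=10):
    s = 1.0
    predicted = 2.0 ** -53      # rough size of the initial representation error
    for k in range(steps):
        s = s * (r + 1) - r
        predicted *= (r + 1)    # error amplified by the derivative r+1
        print(k, s - 1.0, predicted)

err_growth(2**11 - 2/3.0)

With r close to 2**11, both columns cross 1 at about the 5th step, matching the estimate above.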

Phew, the maths I learned 40 years ago are still not broken...

Upvotes: 1

Thierry Lathuille

Reputation: 24289

When executing this code with Python 2, / between integers means integer division (which is now called // in Python 3).

So, in this case, 2/3, 1/3 and so on are all equal to 0, and what you actually compute is ErrL(2**11), ..., whose elements are always exactly 1.

With Python 3, 2/3 is a float, and not 0, which explains why you get different results.
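A minimal sketch of what Python 2 effectively computed, written with // so it runs the same way on both versions:

# with integer division, 2/3 == 0, so the call reduces to ErrL(2**11)
# and s = (r+1)*s - r stays exactly 1 forever
def ErrL(r):
    s = 1
    L = []
    for k in range(10):
        s = s * (r + 1) - r
        L.append(s)
    return L

print(2 // 3)                # 0, which is what Python 2 computes for 2/3
print(ErrL(2**11 - 2 // 3))  # [1, 1, 1, 1, 1, 1, 1, 1, 1, 1]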

Upvotes: 2
