mbrivio

Reputation: 39

Problem with roots of a non-linear equation

I have a hyperbolic function and I need to find its zero. I have tried various classical methods (bisection, Newton, and so on).

Second derivatives are continuous but not accessible analytically, so I have to exclude methods that use them.

For the purposes of my application, Newton's method is the only one providing sufficient speed, but it is relatively unstable if I am not close enough to the actual zero. Here is a simple screenshot:

[Screenshot: plot of the function, diverging at x = 0, with its zero near x ≈ 0.05]

The zero is somewhere around 0.05, and since the function diverges at 0, if I take an initial guess greater than the location of the minimum by a certain extent, I obviously run into problems with the asymptote.

Is there a more stable method for this case that could still offer speed comparable to Newton's?

I also thought of transforming the function into an equivalent one with the same zero and only then applying Newton, but I don't really know which transformations are admissible.

Any help would be appreciated.

Upvotes: 0

Views: 152

Answers (3)

Lutz Lehmann

Reputation: 25992

Dekker's or Brent's method should be almost as fast as Newton's. If you want something simple to implement yourself, the Illinois variant of the regula falsi (false position) method is also reasonably fast. These are all bracketing methods, so they will not leave the domain as long as the initial interval lies inside it.

import numpy as np

def illinois(f, a, b, tol=1e-8):
    '''regula falsi resp. false position method with
       the Illinois anti-stalling variation'''
    fa = f(a)
    fb = f(b)
    # keep the better approximation (smaller |f|) in b
    if abs(fa) < abs(fb): a, fa, b, fb = b, fb, a, fa
    while np.abs(b - a) > tol:
        # false-position (secant) point of the current bracket
        c = (a*fb - b*fa)/(fb - fa)
        fc = f(c)
        if fa*fc < 0:
            # root lies in [a, c]: keep a, but halve fa so the
            # retained endpoint cannot stall the iteration
            fa *= 0.5
        else:
            # root lies in [c, b]: the old b becomes the new a
            a, fa = b, fb
        b, fb = c, fc
    return b, fb
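
For example, calling it on a toy function with a pole at 0 and a root at 0.05 (this function and bracket are made up for illustration, not taken from the question):

f = lambda x: 1/x - 20                # hypothetical stand-in: pole at 0, root at x = 0.05

root, fval = illinois(f, 0.01, 1.0)   # the bracket [0.01, 1.0] encloses the root
print(root, fval)                     # converges to x = 0.05

Because every iterate stays inside the current bracket, the iteration can never jump across the asymptote the way an unguarded Newton step can.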

Upvotes: 3

Dr. V

Reputation: 1914

For your case, @sams-studio's answer might work, and I would try that first. In a similar situation (also in a multivariate context) I used Newton-homotopy methods.

Basically, you limit the Newton step until the absolute value of y decreases. The cheapest implementation is to halve the Newton step whenever |y| increases relative to the last step. After a few such steps, you are back to full Newton steps with their second-order convergence.
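
A minimal sketch of that step-halving idea (the name damped_newton and the interface f, df are my own illustration, not code from this answer):

def damped_newton(f, df, x0, tol=1e-8, max_halvings=50):
    '''Newton's method, halving the step while |f| does not decrease.'''
    x, fx = x0, f(x0)
    while abs(fx) > tol:
        step = fx / df(x)
        for _ in range(max_halvings):
            x_new = x - step
            f_new = f(x_new)
            if abs(f_new) < abs(fx):   # accept only if the residual shrinks
                break
            step *= 0.5                # otherwise halve the Newton step
        x, fx = x_new, f_new
    return x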

Disclaimer: if you can bound your solution (you know a maximal x), the answer from @Lutz Lehmann would also be my first choice.

Upvotes: 1

sams-studio

Reputation: 733

How about using log(x) instead of x?
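
If I understand this suggestion correctly: substitute x = e^t and run Newton on g(t) = f(e^t). Since t ranges over all reals, the iterates can never cross the asymptote at x = 0, and the root is recovered as x = e^t. A rough sketch, assuming the first derivative fprime is available (as Newton in the question already requires); the names here are my own:

import numpy as np

def newton_log(f, fprime, x0, tol=1e-10, max_iter=100):
    '''Newton on g(t) = f(exp(t)); iterates stay in x > 0.'''
    t = np.log(x0)
    for _ in range(max_iter):
        x = np.exp(t)
        fx = f(x)
        if abs(fx) < tol:
            break
        t -= fx / (fprime(x) * x)   # chain rule: g'(t) = f'(e^t) * e^t
    return np.exp(t)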

Upvotes: 2
