PAM

Reputation: 101

Dealing with SciPy fmin_bfgs precision loss

I'm currently trying to solve numerically a minimization problem and I tried to use the optimization library available in SciPy.

My function and its derivative are a bit too complicated to present here, but they are based on the following functions, whose minimization does not work either:

import numpy as np

def func(x):
    return np.log(1 + np.abs(x))

def grad(x):
    return np.sign(x) / (1.0 + np.abs(x))
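For reference, the call producing this behaviour looks roughly as follows (using scipy.optimize.fmin_bfgs with the functions above and the starting point x = 10):

from scipy.optimize import fmin_bfgs

# minimize func starting from x = 10, supplying the analytical gradient
x_min = fmin_bfgs(func, x0=10.0, fprime=grad)
print(x_min)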

When calling the fmin_bfgs function (with the descent initialized at x = 10), I get the following message:

Warning: Desired error not necessarily achieved due to precision loss.
     Current function value: 2.397895
     Iterations: 0
     Function evaluations: 24
     Gradient evaluations: 22

and the output is equal to 10 (i.e. the initial point). I suppose that this error may be caused by two problems:

Are my suppositions true? Or does the problem come from something else? Whatever the cause may be, what can I do to correct it? In particular, is there another minimization method available that I could use?

Thanks in advance.

Upvotes: 0

Views: 1111

Answers (1)

Erwin Kalvelagen

Reputation: 16724

abs(x) is always somewhat dangerous as it is non-differentiable (at x = 0), and most solvers expect problems to be smooth. Note that we can drop the log from your objective function (log is monotonically increasing) and then drop the constant 1, so we are left with minimizing abs(x). Often this is better handled by the following reformulation.

Instead of min abs(x) use

min t
-t <= x <= t

Of course this requires a solver that can solve (linearly) constrained NLPs.
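With SciPy this could look roughly as follows, using the SLSQP method (just one possible choice of solver for linearly constrained problems) and stacking the variables as z = [x, t]; the variable names here are only for illustration:

# Sketch of the epigraph reformulation: minimize t subject to -t <= x <= t,
# assuming SciPy's SLSQP solver.
import numpy as np
from scipy.optimize import minimize

def objective(z):
    # z = [x, t]; the objective is t, the bound on |x|
    return z[1]

constraints = [
    {"type": "ineq", "fun": lambda z: z[1] - z[0]},  # t - x >= 0
    {"type": "ineq", "fun": lambda z: z[1] + z[0]},  # t + x >= 0
]

z0 = np.array([10.0, 10.0])  # start at x = 10 with t = |x|
res = minimize(objective, z0, method="SLSQP", constraints=constraints)
print(res.x)

At the optimum the constraints squeeze t down onto |x|, so both x and t are driven to 0, the minimizer of abs(x).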

Upvotes: 1
