Camille

Reputation: 79

basic example with fmin_bfgs from scipy.optimize (python) does not work

I am learning the optimization functions in scipy. I want to use the BFGS algorithm, for which the gradient of the objective function can be provided. As a basic example, I want to minimize the function f(x) = x^T A x, where x is a vector.

When I implement this in python (see implementation below), I get the following error:

message: 'Desired error not necessarily achieved due to precision loss.'

Tracing the error back to its source led me to the scipy function that performs the line search to determine the step length, but I have no clue why it fails on such a simple example.

My code to implement this is as follows:

# coding: utf-8
from scipy import optimize
import numpy as np

# Matrix used in the function definitions
A = np.array([[1., 2.], [2., 3.]])

# Objective function to be minimized: f(x) = x^T A x
def f(x):
    return np.dot(x.T, np.dot(A, x))

# Gradient of the objective function: grad f = 2 A x (since A is symmetric)
def fp(x):
    return 2 * np.dot(A, x)

# Initial value of x
x0 = np.array([1., 2.])

# Try with BFGS
xopt = optimize.minimize(f, x0, method='BFGS', jac=fp, options={'disp': True})

Upvotes: 1

Views: 4906

Answers (1)

cel

Reputation: 31339

The problem here is that you are looking for a minimum, but your target function f(x) is not bounded below, so there is no minimum to reach.

At first sight, your problem looks like a textbook example of a convex target function, but a closer look shows that it is not.

For convexity, A has to be positive (semi-)definite. This condition is violated in your case: det(A) = 1·3 − 2·2 = −1 < 0, so A has one positive and one negative eigenvalue, i.e. it is indefinite.
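As a quick numerical check (a minimal sketch using only numpy; it is not part of the original code), you can compute the eigenvalues of A and evaluate f along the eigenvector of the negative eigenvalue, where it decreases without bound:

import numpy as np

A = np.array([[1., 2.], [2., 3.]])

# det(A) = 1*3 - 2*2 = -1 < 0, so A is indefinite
print(np.linalg.det(A))              # -1.0

# eigh returns eigenvalues in ascending order for a symmetric matrix
w, v = np.linalg.eigh(A)
print(w)                             # approx [-0.236, 4.236]

# Along the unit eigenvector d of the negative eigenvalue lambda,
# f(t*d) = lambda * t^2, which goes to -inf as t grows
d = v[:, 0]
for t in (1., 10., 100.):
    x = t * d
    print(np.dot(x, np.dot(A, x)))   # approx -0.24, -23.6, -2360

Along such a direction f keeps decreasing, which is why the line search eventually gives up with a precision-loss message instead of converging.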

If you instead pick A = np.array([[2.,2.],[2.,3.]]), which is positive definite (det(A) = 2 > 0 and the diagonal entries are positive), everything will be fine again.
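For completeness, here is a minimal sketch of the corrected run (the script from the question with only A swapped out); BFGS now converges to the unique minimum at the origin:

from scipy import optimize
import numpy as np

# Positive definite: det(A) = 2*3 - 2*2 = 2 > 0 and the diagonal is positive
A = np.array([[2., 2.], [2., 3.]])

def f(x):
    return np.dot(x.T, np.dot(A, x))

def fp(x):
    return 2 * np.dot(A, x)

x0 = np.array([1., 2.])
res = optimize.minimize(f, x0, method='BFGS', jac=fp, options={'disp': True})
print(res.x)   # approximately [0., 0.]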

Upvotes: 2
