LorenzoLMP

Reputation: 71

Find the maximum of a function with bounds in Python

I would like to find a local maximum of a function f(x), where x can only range between two fixed values, since f(x) tends to +inf as x tends to +inf. I've been trying algorithms such as scipy.optimize.fmin_l_bfgs_b and scipy.optimize.fmin_tnc (from the scipy reference guide), but I can't figure out how to set the bounds correctly. (I know, it must be something stupid, but I'm quite a noob with Python.) Let's take an easy example:

>>>import scipy.optimize as opt  
>>>import scipy  
>>>from numpy import *  

>>>def f (x): return x**(1/2.0)  
>>>max_x = opt.fmin_l_bfgs_b(lambda x: -f(x), [0,0], bounds=([0,0],[9,0])) #I want x to range between 0 and 9 and fmax be 3

The output is pretty strange, though: I get nothing at all! Not even an error! What am I missing?

Upvotes: 2

Views: 11292

Answers (2)

jcrudy

Reputation: 4061

The bounds argument goes [(lower1,upper1),(lower2,upper2)], not [(lower1,lower2),(upper1,upper2)]. If you look at your result (max_x) you will see "ERROR: NO FEASIBLE SOLUTION", which I am guessing is because your bounds specify an empty set.
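
For concreteness, a minimal sketch of the format (my illustration, using the 0-to-9 range from the question):

# One (lower, upper) pair per variable; a single x restricted to [0, 9] would be:
bounds = [(0, 9)]
# The question's bounds=([0, 0], [9, 0]) would instead be read as two variables,
# the second with lower=9 and upper=0, i.e. an empty (infeasible) interval.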

Here is a correct way to call the function. I assume the square root is just an example. I used -x**2 instead.

import scipy.optimize as opt
import scipy
from numpy import *

def f(x):
    print(x)        # show each point at which the optimizer evaluates f
    return -x**2

# Maximize f by minimizing -f; approx_grad=True makes L-BFGS-B estimate the gradient numerically.
max_x = opt.fmin_l_bfgs_b(lambda x: -f(x), 1.0, bounds=[(-9, 9)], approx_grad=True)

Because you are not specifying a gradient function, you need to set approx_grad=True. The 1.0 is my initial guess for the location of the maximum (although it is obviously zero for this example). I added a print statement so I can see each time the function is called, but that's normally not necessary. For more details on the different ways to call fmin_l_bfgs_b, see the scipy reference documentation.

The above code results in:

[ 1.]
[ 1.]
[ 1.00000001]
[-0.99999999]
[-0.99999999]
[-0.99999998]
[ 0.001]
[ 0.001]
[ 0.00100001]
[ -5.01108742e-09]
[ -5.01108742e-09]
[  4.98891258e-09]

And max_x looks like this:

(array([ -5.01108742e-09]),
 array([  2.51109971e-17]),
 {'funcalls': 4,
  'grad': array([ -2.21748344e-11]),
  'task': 'CONVERGENCE: NORM_OF_PROJECTED_GRADIENT_<=_PGTOL',
  'warnflag': 0})
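
As an aside on the approx_grad point above (a hedged sketch, not part of the original answer): fmin_l_bfgs_b also accepts an analytic gradient through its fprime argument, which avoids the extra finite-difference evaluations. For the same -x**2 example:

import scipy.optimize as opt

# Minimize -f(x) = x**2 with its gradient 2*x supplied via fprime,
# so approx_grad is no longer needed.
max_x = opt.fmin_l_bfgs_b(lambda x: x**2,        # objective: -f(x)
                          1.0,                   # initial guess, as above
                          fprime=lambda x: 2*x,  # analytic gradient of x**2
                          bounds=[(-9, 9)])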

Upvotes: 4

kirelagin

Reputation: 13616

Why are you using multivariate minimizers? Try scipy.optimize.fminbound.

max_x = opt.fminbound(lambda x: -f(x), 0, 9)
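
Spelled out with the square-root f from the question (a minimal sketch; the values in the final comment are approximate expectations):

import scipy.optimize as opt

def f(x):
    return x**0.5            # the square-root example from the question

# fminbound minimizes a scalar function on an interval, so maximize f by minimizing -f on [0, 9].
x_best = opt.fminbound(lambda x: -f(x), 0, 9)
print(x_best, f(x_best))     # expect x close to 9 and f(x) close to 3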

Upvotes: 4
