Reputation: 5418
I just found a behavior which I cannot explain. Am I missing something?
I have an implicit function:
def my_cost_fun(x, a, b, c):
    # x is a scalar; all other variables are provided as numpy arrays
    F = some_fun(x, a, b, c) - x
    return F
I solve for the root using:
optimize.fsolve(my_cost_fun, 0.2, args=(a, b, c))
optimize.brentq(my_cost_fun, -0.2, 0.2, args=(a, b, c))
Or with the minimize function:
optimize.minimize(my_cost_fun, 0.2, args=(a, b, c), method='L-BFGS-B', bounds=((0, a),))
The strange thing is:
If I use return F:
- %timeit measures the fastest loop at ~250 µs
- L-BFGS-B does not change x0 at all and provides a wrong result
If I use return F**2:
- fsolve returns the right solution but converges slowly; 1.2 ms for the fastest loop
- L-BFGS-B returns the right solution but converges slowly: 1.5 ms for the fastest loop
Can someone explain why?
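For reference, a minimal self-contained sketch of the setup; some_fun below is a made-up stand-in (the real definition is more involved), with made-up values for a, b and c:

from scipy import optimize

# Hypothetical stand-in for some_fun; not the real definition
def some_fun(x, a, b, c):
    return a * x**2 + b * x + c

def my_cost_fun(x, a, b, c):
    # Residual of the implicit equation some_fun(x, a, b, c) = x
    return some_fun(x, a, b, c) - x

a, b, c = 1.0, 0.5, 0.05  # made-up values; F then has a root near 0.1382

# Both dedicated root finders locate the root from the plain residual F
print(optimize.fsolve(my_cost_fun, 0.2, args=(a, b, c)))
print(optimize.brentq(my_cost_fun, -0.2, 0.2, args=(a, b, c)))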
Upvotes: 1
Views: 1690
Reputation: 4824
As I mentioned in the comments:
Here is one possible explanation of why L-BFGS-B is not working when you use return F: if the value of F can be negative, then optimize.minimize will try to find the most negative value it can. minimize isn't necessarily finding a root; it's finding the minimum. If you return F**2 instead, then since F**2 is always non-negative for real-valued functions, the minima of F**2 occur at F = 0, i.e. the minima are the roots.
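To illustrate with a made-up some_fun (hypothetical, since yours isn't posted): minimizing F itself drives x to wherever F is most negative, while minimizing F**2 lands on a root:

from scipy import optimize

# Hypothetical stand-in for the question's some_fun
def some_fun(x, a, b, c):
    return a * x**2 + b * x + c

def my_cost_fun(x, a, b, c):
    return some_fun(x, a, b, c) - x

a, b, c = 1.0, 0.5, 0.05  # made-up values; F has roots near 0.1382 and 0.3618

# Minimizing F itself: L-BFGS-B heads for the most negative value of F,
# which here is the interior minimum at x = 0.25 (F = -0.0125), not a root
res_f = optimize.minimize(my_cost_fun, 0.2, args=(a, b, c),
                          method='L-BFGS-B', bounds=((0, a),))

# Minimizing F**2: the smallest possible value is 0, attained only at a root
res_f2 = optimize.minimize(lambda x, a, b, c: my_cost_fun(x, a, b, c)**2,
                           0.2, args=(a, b, c),
                           method='L-BFGS-B', bounds=((0, a),))

print(res_f.x)   # ~0.25   -> not a root of F
print(res_f2.x)  # ~0.1382 -> a root of F

With your real some_fun the details will differ, but the general point is the same: minimizing F rewards negative values rather than zeros.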
This doesn't explain your timing issue, but that may be of secondary concern. I would still be curious to study the timing with your particular some_fun(x,a,b,c)
if you get a chance to post a definition.
Upvotes: 1