Reputation: 21717
My function:
import scipy.optimize

count = 0

def fake(x):
    global count
    print(count)  # running count of calls so far
    count += 1
    return x ** 4 + 10 * x ** 3 + 4 * x ** 2 + 7 * x + 1
The 'Nelder-Mead' method gives me the correct number of function calls:
scipy.optimize.fmin(fake, [1])
0
1
...
45
Optimization terminated successfully.
Current function value: -887.470826
Iterations: 23
Function evaluations: 46
Out[377]:
array([-7.25761719])
The BFGS method also gives me the correct number of function calls:
scipy.optimize.fmin_bfgs(fake, [1])
0
1
...
61
62
Optimization terminated successfully.
Current function value: -887.470826
Iterations: 6
Function evaluations: 63
Gradient evaluations: 21
Out[380]:
array([-7.25765231])
However, L-BFGS-B gives me a strange number of function calls: my counter prints 45 calls (0 through 44), yet the result reports only 15. What happened?
scipy.optimize.fmin_l_bfgs_b(fake, [1], approx_grad=True)
0
1
...
43
44
Out[374]:
(array([-7.25765246]),
array([-887.47082639]),
{'funcalls': 15,
'grad': array([ -3.41060513e-05]),
'nit': 6,
'task': 'CONVERGENCE: REL_REDUCTION_OF_F_<=_FACTR*EPSMCH',
'warnflag': 0})
Upvotes: 3
Views: 583
Reputation: 19760
From the code, it appears that the function-evaluation counter is incremented only when the function and gradient are evaluated together; it does not count the extra function calls made to approximate the gradient numerically. That would explain why your own counter shows 45 calls while the result reports funcalls as 15.
Inside my version of lbfgsb.py:
197 n_function_evals += 1
198 # Overwrite f and g:
199 f, g = func_and_grad(x)
I recommend that you report this as a bug.
Edit: As per the comment below, this was actually a bug and was fixed.
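If you want to check whether the SciPy you have installed includes that fix, a minimal sketch along the following lines should do it (the wrapper and names such as make_counted are just illustrative, not part of SciPy): it counts the actual calls with a closure and compares that count with the reported funcalls.

import scipy.optimize

def make_counted(f):
    # Wrap f so the true number of calls can be read back afterwards.
    state = {'calls': 0}
    def wrapper(x):
        state['calls'] += 1
        return f(x)
    return wrapper, state

def poly(x):
    return x ** 4 + 10 * x ** 3 + 4 * x ** 2 + 7 * x + 1

counted, state = make_counted(poly)
x_opt, f_opt, info = scipy.optimize.fmin_l_bfgs_b(counted, [1.0], approx_grad=True)

# On a version with the fix, the two numbers should agree; on an older
# version, the wrapper's count will exceed the reported 'funcalls'.
print(state['calls'], info['funcalls'])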
Upvotes: 3