Eric Weine

Reputation: 53

Preconditioning conjugate gradient solvers in scipy.optimize.minimize

I'm solving a large unconstrained optimization problem and experimenting with the trust-krylov / trust-ncg methods in scipy.optimize.minimize. Unfortunately, these methods can be quite slow when my problem is poorly conditioned. To mitigate this, I am hoping to do the following:

(1) Start with a "warmup" phase where I optimize some fixed number of iterations with LBFGS.

(2) Use the resulting LbfgsInvHessProduct LinearOperator as a preconditioner for the trust-region method and solve to convergence.

Unfortunately, I do not see a simple way to pass the preconditioner from step (2) to scipy.optimize.minimize. Is there any way to do this? Or is there a simple workaround where I could define a custom method with this capability?
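For concreteness, here is a minimal sketch of the setup I have in mind, using the Rosenbrock test function as a stand-in objective. The warmup phase does yield the LbfgsInvHessProduct operator, but as far as I can tell there is no argument through which minimize() would accept it as a preconditioner for the trust-region phase:

```python
import numpy as np
from scipy.optimize import minimize, rosen, rosen_der, rosen_hess

# (1) Warmup: a fixed number of L-BFGS iterations.
x0 = np.zeros(5)
warmup = minimize(rosen, x0, jac=rosen_der, method="L-BFGS-B",
                  options={"maxiter": 10})

# The result exposes the inverse-Hessian approximation as a
# LbfgsInvHessProduct, which is a scipy LinearOperator.
M = warmup.hess_inv
print(type(M).__name__)  # LbfgsInvHessProduct

# (2) Trust-region solve from the warm start. Note there is no
# parameter here to pass M as a preconditioner for the inner
# CG/Krylov iterations -- which is exactly the question.
res = minimize(rosen, warmup.x, jac=rosen_der, hess=rosen_hess,
               method="trust-ncg")
```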

Thanks!

Upvotes: 0

Views: 54

Answers (0)
