Reputation: 9149
I have a least squares minimization problem of the following form, where the parameters I want to optimize over are x and everything else is known.
scipy.optimize.least_squares has the following signature:

scipy.optimize.least_squares(fun, x0)

where x0 is an initial guess and fun is a "Function which computes the vector of residuals".
After reading the documentation, I'm a little confused about what fun should return. If I do the summation inside fun, then I'm afraid it would compute the RHS, which is not equivalent to the LHS (...or is it, when it comes to minimization?)
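For concreteness, here is a hypothetical sketch of the two options I'm weighing (f stands in for my model and y for my data; both are placeholders, not actual code):

import numpy as np

def fun_scalar(x):
    # Option 1: do the summation inside fun (returns the RHS, a scalar)
    return np.sum((f(x) - y) ** 2)

def fun_vector(x):
    # Option 2: return the raw residuals as a vector of shape (m,)
    return f(x) - y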
Thanks for any assistance!
Upvotes: 2
Views: 1194
Reputation: 556
According to the documentation of scipy.optimize.least_squares, the argument fun provides the vector of residuals with which the minimization proceeds. It is possible to supply a scalar that is the sum of squared residuals, but it is also possible to supply a one-dimensional vector of shape (m,), where m is the number of residuals. Note that squaring and summation are not done in the vector case, as least_squares handles those details on its own; only the residuals themselves must be supplied.
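Here is a minimal runnable sketch of the vector-residual approach, assuming a made-up exponential model a * exp(-b * t) and synthetic data (both are illustrative, not from the question):

import numpy as np
from scipy.optimize import least_squares

# Synthetic data from y = a * exp(-b * t) with a=3, b=0.5, plus noise (illustrative)
t = np.linspace(0, 10, 50)
y = 3.0 * np.exp(-0.5 * t) + 0.1 * np.random.default_rng(0).normal(size=t.size)

def fun(x):
    # Return the raw residuals r_i = model(t_i; x) - y_i, an array of shape (m,).
    # least_squares squares and sums these internally; no summation here.
    return x[0] * np.exp(-x[1] * t) - y

res = least_squares(fun, x0=[1.0, 1.0])
print(res.x)     # estimated parameters (a, b)
print(res.cost)  # 0.5 * sum of squared residuals at the solution

Note that res.cost reports half the sum of squared residuals, so the objective being minimized is built from the vector you return; you do not need to compute it yourself.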
Upvotes: 2