Mathieu

Reputation: 305

Get covariance of best-fit parameters obtained by lmfit using non-"leastsq" methods

I have some observational data and I want to fit model parameters using lmfit.Minimizer() to minimize an error function which, for reasons I won't get into here, must return a float instead of an array of residuals. This means that I cannot use the leastsq method to minimize the function. In practice, the nelder, BFGS, and powell methods converge fine, but these methods do not provide the covariance of the best-fit parameters (MinimizerResult.covar).
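To illustrate the situation, here is a minimal sketch using scipy.optimize.minimize directly (lmfit's nelder method wraps scipy's Nelder-Mead simplex); the linear model and the noisy data are invented for illustration:

```python
import numpy as np
from scipy.optimize import minimize

# made-up data for a linear model y = a*x + b
x = np.linspace(0, 10, 51)
data = 3.0 * x + 1.5 + np.random.default_rng(0).normal(0, 0.2, x.size)

def chisqr(p):
    # scalar objective: the SUM of squared residuals, not the residual
    # array -- so leastsq cannot be used, only scalar minimizers
    a, b = p
    return float(np.sum((data - (a * x + b)) ** 2))

result = minimize(chisqr, x0=[1.0, 0.0], method='Nelder-Mead')
print(result.x)  # best-fit (a, b), close to (3.0, 1.5)
```

Nelder-Mead (and BFGS, Powell) return only the best-fit point, not the covariance that leastsq derives from its residual-array Jacobian.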

I would like to know if there is a simple way to compute this covariance when using any of the non-leastsq methods.

Upvotes: 0

Views: 689

Answers (1)

M Newville

Reputation: 7862

It is true that the leastsq method is the only method that can calculate error bars and that this requires a residual array (with more elements than variables!).

It turns out that some work has been done in lmfit toward the goal of being able to compute uncertainties for scalar minimizers, but it is not complete. See https://github.com/lmfit/lmfit-py/issues/169 and https://github.com/lmfit/lmfit-py/pull/481. If you're interested in helping, that would be great!

But, yes, you could compute the covariance by hand. For each variable, you would need to make a small perturbation to its value (ideally around 1-sigma, but since that is what you're trying to calculate, you probably don't know it) and then fix that value and optimize all the other values. In this way you can compute the Jacobian matrix (derivative of the residual array with respect to the variables).

From the Jacobian matrix, the covariance matrix is (assuming there are no singularities):

covar = numpy.linalg.inv(numpy.dot(numpy.transpose(jacobian), jacobian))
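As a hedged sketch of that by-hand computation (the simpler variant that approximates the Jacobian by central finite differences at the best-fit values, holding the other parameters fixed rather than re-optimizing them), with made-up data for the same linear model; note it also scales the covariance by the reduced chi-square, as lmfit's leastsq does by default:

```python
import numpy as np

# made-up data for a linear model y = a*x + b
x = np.linspace(0, 10, 51)
data = 3.0 * x + 1.5 + np.random.default_rng(1).normal(0, 0.2, x.size)

def residual(p):
    a, b = p
    return data - (a * x + b)

# best-fit values (here obtained analytically; any minimizer would do)
best = np.linalg.lstsq(np.column_stack([x, np.ones_like(x)]),
                       data, rcond=None)[0]

def jacobian_fd(p, eps=1e-6):
    # column j = d(residual)/d(p[j]) by central finite differences
    cols = []
    for j in range(len(p)):
        dp = np.zeros_like(p, dtype=float)
        dp[j] = eps
        cols.append((residual(p + dp) - residual(p - dp)) / (2 * eps))
    return np.column_stack(cols)

jac = jacobian_fd(np.asarray(best, dtype=float))
# scale by reduced chi-square (drop s2 to reproduce the bare formula above)
nfree = x.size - len(best)
s2 = np.sum(residual(best) ** 2) / nfree
covar = s2 * np.linalg.inv(jac.T @ jac)
print(np.sqrt(np.diag(covar)))  # 1-sigma uncertainties on (a, b)
```

For this linear model the finite-difference Jacobian is exact, so the result matches the standard least-squares covariance.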

Upvotes: 0
