user7796883

How does the least squares method work with a given function?

I have some data and I used the numpy/scipy code below to do the fitting. The fit was fine, but I need some explanation of how it works. What do p[0], p[1], and p[2] stand for? It would be nice to get the mathematical expression for that. What is the least squares fit actually doing?

import numpy as np
from scipy import optimize

fitfuncvx = lambda p, x: p[2]+p[0]*np.exp(-x/p[1])    # model: offset + decaying exponential
errfuncvx = lambda p, x, y: y - fitfuncvx(p, x)       # residuals: data minus model
sig_fit = np.where(Sig < 12)                          # points to include in the fit
pinit = [1, 100, 0.1]                                 # initial guess for p
pfinal, success = optimize.leastsq(errfuncvx, pinit[:], args=(Sig[sig_fit], vx[sig_fit]))

Upvotes: 1

Views: 292

Answers (2)

pylang

Reputation: 44585

What are p[0], p[1], p[2]?

The scipy.optimize functions typically return an array of parameters p. For example, given a linear equation:

y = b0 + b1*x

p includes the intercept and successive coefficients (or weights) of a linear equation:

y = p[0] + p[1]*x + ...

Thus, in the latter example, p[0] and p[1] pertain to the intercept and slope of a line, respectively. Of course, more parameters (...) can be optimized as well for higher-order polynomials. The OP uses an exponential function, whose parameters can be rewritten as follows:

def fitfuncvx(p, x):
    b0, b1, b2 = p
    return b2 + b0*np.exp(-x/b1)

We see the parameters in p are explicitly unpacked into separate weights b0, b1, b2, which directly correspond with p[0], p[1], p[2] respectively.
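To make this concrete, here is a minimal self-contained sketch (the data and the "true" parameter values are invented for illustration; they are not the OP's data). leastsq recovers the known values in pfinal, whose entries correspond to p[0], p[1], p[2]:

import numpy as np
from scipy import optimize

def fitfuncvx(p, x):
    b0, b1, b2 = p
    return b2 + b0*np.exp(-x/b1)

def errfuncvx(p, x, y):
    return y - fitfuncvx(p, x)

rng = np.random.default_rng(0)
x = np.linspace(0, 500, 200)
true_p = [2.0, 80.0, 0.5]          # made-up "true" amplitude, decay constant, offset
y = fitfuncvx(true_p, x) + 0.05*rng.normal(size=x.size)

pinit = [1, 100, 0.1]              # initial guess, as in the question
pfinal, success = optimize.leastsq(errfuncvx, pinit, args=(x, y))
print(pfinal)                      # approximately [2.0, 80.0, 0.5], i.e. p[0], p[1], p[2]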


Details: How do optimizers work?

The first value returned by scipy.optimize.leastsq is the array of optimized (fitted) parameters, computed by starting from your initial guess and iteratively minimizing the residuals. A residual is the distance between the predicted response (the y-hat value) and the true response (y). With the default arguments, as in the question, the second returned value is an integer status flag (success) indicating whether a solution was found; calling leastsq with full_output=True additionally returns a covariance matrix from which the error in the fitted parameters can be estimated.
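As a sketch of that error estimate (reusing errfuncvx, pinit, x, and y from the example above; per the scipy docs, cov_x must be scaled by the residual variance, and it can be None if the fit is degenerate):

pfinal, cov_x, infodict, mesg, ier = optimize.leastsq(
    errfuncvx, pinit, args=(x, y), full_output=True)

residuals = infodict['fvec']            # y - fitfuncvx(pfinal, x), evaluated at the solution
dof = len(x) - len(pfinal)              # degrees of freedom
s_sq = np.sum(residuals**2) / dof       # residual variance
perr = np.sqrt(np.diag(cov_x) * s_sq)   # estimated 1-sigma errors on p[0], p[1], p[2]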

For reference, I include the first three arguments of the leastsq signature:

scipy.optimize.leastsq(func, x0, args=(), ...)
  • func is the objective function you wish to optimize
  • x0 is the initial guess for the parameters
  • args are additional variables required by the objective function (if any)
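In the question's call these map as follows (this simply restates the call from the question with the arguments annotated):

optimize.leastsq(
    errfuncvx,                          # func: returns the residuals y - fitfuncvx(p, x)
    pinit,                              # x0:   the initial guess [1, 100, 0.1]
    args=(Sig[sig_fit], vx[sig_fit]),   # args: the x and y data forwarded to errfuncvx
)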

Upvotes: 1

duffymo

Reputation: 308998

The values p[0], p[1], and p[2] are coefficients being solved for in the least squares fit.

The least squares fit is calculating the values of the coefficients that minimize the sum of squared errors between the dependent variable data values and those predicted by the fitted function.
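In code terms (a sketch using the model from the question), the quantity being minimized over the coefficients is:

import numpy as np

def sum_squared_errors(p, x, y):
    y_hat = p[2] + p[0]*np.exp(-x/p[1])    # values predicted by the fitted function
    return np.sum((y - y_hat)**2)          # sum of squared errors

leastsq is given the vector of residuals y - y_hat and searches for the p that makes the sum of their squares as small as possible.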

It's using an iterative method to calculate the coefficients given a starting guess; scipy's leastsq wraps MINPACK's implementation of the Levenberg-Marquardt algorithm.

I don't think you should have a p[2].

Your fit function should be:

y = c0*exp(-c1*x)

If I take the natural logarithm of both sides:

ln(y) = ln(c0) - c1*x = z

If you do that transformation, you're doing a simple linear regression on a new function z = z(x).

That's an easy problem. There are formulas for the coefficients for the case of one independent variable.

Solve for the coefficients using your transformed data and substitute back into the original equation.
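A sketch of that procedure (the x and y arrays below are hypothetical, and every y must be positive for the logarithm to be defined):

import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])        # hypothetical data
y = np.array([3.0, 1.9, 1.15, 0.70, 0.43])

z = np.log(y)                                   # z = ln(y) = ln(c0) - c1*x

# closed-form least-squares formulas for one independent variable
slope = np.sum((x - x.mean()) * (z - z.mean())) / np.sum((x - x.mean())**2)
intercept = z.mean() - slope * x.mean()

c1 = -slope                                     # decay rate in y = c0*exp(-c1*x)
c0 = np.exp(intercept)                          # amplitude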

Upvotes: 0
