coffeedealer

Reputation: 33

Optimization of fitting parameters of two coupled equations using scipy's least-squares method

I want to fit these 4 parameters:

K0 = np.array([K11, K12, K21, K22])

from the coupled equations f(x, parameters) and g(x, parameters):

f(x, K11, K12) = (K11*x + 2*K11*K12*x^2) / (2*(1 + K11*x + K11*K12*x^2))
g(x, K21, K22) = (K21*x + 2*K21*K22*x^2) / (2*(1 + K21*x + K21*K22*x^2))

You can think of these equations as normalized, with a sigmoidal, hysteresis-like line shape where the upper arm of the curve is described by g and the lower arm by f.

The idea is to obtain the fitted K0 parameters together with their errors, calculated as in the `bootstrap` section from here, pasted below:

errfunc = lambda p, x, y: function(x,p) - y

# Fit first time
pfit, perr = optimize.leastsq(errfunc, p0, args=(datax, datay), full_output=0)


# Get the stdev of the residuals
residuals = errfunc(pfit, datax, datay)
sigma_res = np.std(residuals)

sigma_err_total = np.sqrt(sigma_res**2 + yerr_systematic**2)

# 100 random data sets are generated and fitted
ps = []
for i in range(100):

    randomDelta = np.random.normal(0., sigma_err_total, len(datay))
    randomdataY = datay + randomDelta

    randomfit, randomcov = \
        optimize.leastsq(errfunc, p0, args=(datax, randomdataY),\
                         full_output=0)

    ps.append(randomfit) 

ps = np.array(ps)
mean_pfit = np.mean(ps,0)

# You can choose the confidence interval that you want for your
# parameter estimates: 
Nsigma = 1. # 1sigma gets approximately the same as methods above
            # 1sigma corresponds to 68.3% confidence interval
            # 2sigma corresponds to 95.44% confidence interval
err_pfit = Nsigma * np.std(ps,0) 

pfit_bootstrap = mean_pfit
perr_bootstrap = err_pfit

where pfit_bootstrap holds the fitted parameters and perr_bootstrap the 1-sigma error of each parameter.

Since both equations are coupled, I tried the approach from here.

I defined an error function, foo, in order to minimize the sum of squared differences between the fit and the experimental points:

foo = np.square(Y1exp - f).sum() + np.square(Y2exp - g).sum()

but when running the code line by line, the following call:

pfit, perr = optimize.leastsq(foo, K0, args=(L, Y1exp, Y2exp))

raised the following error:

Traceback (most recent call last)
Cell In[47], line 24
     20 function = np.square(Y1exp - f).sum() + np.square(Y2exp - g).sum()  
     22 print(function)
---> 24 pfit = optimize.leastsq(foo, K0, args=(L, Y1exp, Y2exp))

File ~/anaconda3/envs/TP/lib/python3.9/site-packages/scipy/optimize/_minpack_py.py:415, in leastsq(func, x0, args, Dfun, full_output, col_deriv, ftol, xtol, gtol, maxfev, epsfcn, factor, diag)
    413 if not isinstance(args, tuple):
    414     args = (args,)
--> 415 shape, dtype = _check_func('leastsq', 'func', func, x0, args, n)
    416 m = shape[0]
    418 if n > m:

File ~/anaconda3/envs/TP/lib/python3.9/site-packages/scipy/optimize/_minpack_py.py:25, in _check_func(checker, argname, thefunc, x0, args, numinputs, output_shape)
     23 def _check_func(checker, argname, thefunc, x0, args, numinputs,
     24                 output_shape=None):
---> 25     res = atleast_1d(thefunc(*((x0[:numinputs],) + args)))
     26     if (output_shape is not None) and (shape(res) != output_shape):
     27         if (output_shape[0] != 1):

TypeError: 'numpy.float64' object is not callable

Any suggestion on how to solve this problem and get the fitted K parameters together with their associated errors using this method?


Following the suggestion from @tharen below, I would define the f and g functions as:

x=L

def f(L, K11, K12):
    return (K11*L + 2*K11*K12*L*L) / (2*(1 + K11*L + K11*K12*L*L))

def g(L, K21, K22):
    return (K21*L + 2*K21*K22*L*L) / (2*(1 + K21*L + K21*K22*L*L))

How should I then set f(*K) and g(*K) in the following function, bearing in mind that f takes (x, K11, K12) and g takes (x, K21, K22)?

def foo(K, Y1exp, Y2exp):
    return np.square(Y1exp - f(*K)).sum() + np.square(Y2exp - g(*K)).sum()

Reminder: K0 = np.array([K11, K12, K21, K22])
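
In case it helps, this is the kind of thing I have in mind (just a sketch; I am assuming the first two entries of K belong to f and the last two to g, and that a scalar objective like this would have to go to scipy.optimize.minimize rather than optimize.leastsq, since leastsq expects a vector of residuals):

def foo(K, L, Y1exp, Y2exp):
    # Assumption: K[:2] = (K11, K12) for f, K[2:] = (K21, K22) for g
    return (np.square(Y1exp - f(L, *K[:2])).sum()
            + np.square(Y2exp - g(L, *K[2:])).sum())

res = optimize.minimize(foo, K0, args=(L, Y1exp, Y2exp))
print(res.x)  # candidate fitted [K11, K12, K21, K22]

Would that be the right way to split K between the two functions?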

Upvotes: 0

Views: 117

Answers (1)

tharen

Reputation: 1300

The first argument to leastsq needs to be a callable that receives the parameter array plus any extra arguments you pass. In your example, foo is the result of np.sum(), not a callable. It's a little hard to follow your example, but this might be a place to start.

Instead of this: foo = np.square(Y1exp - f).sum() + np.square(Y2exp - g).sum()

Try something like this:

def foo(K, Y1exp, Y2exp):
    resid = np.square(Y1exp - f(*K)).sum() + np.square(Y2exp - g(*K)).sum()
    return resid

Assuming f & g are defined as functions elsewhere.
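
For completeness, here is a minimal self-contained sketch of that idea. One adjustment: optimize.leastsq expects the callable to return a 1-D array of residuals (with at least as many entries as parameters), so the residuals of the two curves are concatenated instead of summed into a single number. The data below are synthetic stand-ins for your L, Y1exp and Y2exp, and the "true" K values are made up for illustration.

import numpy as np
from scipy import optimize

# Model functions for the lower (f) and upper (g) arms of the curve.
def f(L, K11, K12):
    return (K11*L + 2*K11*K12*L*L) / (2*(1 + K11*L + K11*K12*L*L))

def g(L, K21, K22):
    return (K21*L + 2*K21*K22*L*L) / (2*(1 + K21*L + K21*K22*L*L))

# leastsq needs a callable returning a 1-D residual array, so the residuals
# of both curves are concatenated rather than summed into a scalar.
def residuals(K, L, Y1exp, Y2exp):
    K11, K12, K21, K22 = K
    return np.concatenate([Y1exp - f(L, K11, K12),
                           Y2exp - g(L, K21, K22)])

# Synthetic stand-in data (replace with the experimental L, Y1exp, Y2exp).
rng = np.random.default_rng(0)
L = np.linspace(0.01, 10.0, 50)
K_true = [2.0, 0.5, 0.8, 3.0]                  # made-up "true" values
Y1exp = f(L, *K_true[:2]) + rng.normal(0.0, 0.01, L.size)
Y2exp = g(L, *K_true[2:]) + rng.normal(0.0, 0.01, L.size)

K0 = np.array([1.0, 1.0, 1.0, 1.0])            # initial guess
pfit, ier = optimize.leastsq(residuals, K0, args=(L, Y1exp, Y2exp))
print(pfit)                                    # fitted [K11, K12, K21, K22]

The same residuals callable can then be reused in the bootstrap loop from the question, perturbing both Y1exp and Y2exp before refitting, to estimate the 1-sigma errors on the four K parameters.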

Upvotes: 1
