Reputation: 11
So, I'm writing a program to fit a 5-parameter function (s, t, k, h, b) to a set of experimental data. The program works, but I need to enforce a condition: 0 < k < h < 1. I just need to add this inequality constraint to the fitting process. Could anyone help?
The algorithm is:
import xlrd
import numpy
import math
import scipy
from scipy.optimize import curve_fit
workbook = xlrd.open_workbook('Pasta3.xls')
sheet = workbook.sheet_by_name('teste')
x = []
for i in range(18):
    cell = sheet.cell_value(i+1, 1)
    x.append(cell)
y = []
for i in range(18):
    cell_ = sheet.cell_value(i+1, 0)
    y.append(cell_)
def Re(x, s, t, k, h, b):
    return 130 + ((38000-130)*(1+s*((2*math.pi*x*t)**(-1*k))*math.cos(k*math.pi/2)+ ((2*math.pi*x*t)**(-1*h))*math.cos(h*math.pi/2)))/((((38000-130)*(1+s*((2*math.pi*x*t)**(-1*k))*math.cos(k*math.pi/2)+((2*math.pi*x*t)**(-1*h))*math.cos(h*math.pi/2)))/(38000-130))**2+(((38000-130)*(s*((2*math.pi*x*t)**(-1*k))*math.sin(k*math.pi/2)+((2*math.pi*x*t)**(-1*h))*math.sin(h*math.pi/2)+((2*math.pi*x*t*b)**(-1))))/(38000-130))**2)
popt, _ = curve_fit(Re,x,y)
print(popt)
Upvotes: 1
Views: 532
Reputation: 7862
One way to add inequality constraints would be to use the lmfit library. Among its features is the ability to impose constraints as mathematical expressions; combined with simple upper and lower bounds, that allows inequality constraints.
For your example, converting to lmfit (with a little cleanup to make sure everything is a numpy ndarray) would give something like this:
import xlrd
from numpy import array, pi, sin, cos
from lmfit import Model
workbook = xlrd.open_workbook('Pasta3.xls')
sheet = workbook.sheet_by_name('teste')
x = []
for i in range(18):
    cell = sheet.cell_value(i+1, 1)
    x.append(cell)
y = []
for i in range(18):
    cell_ = sheet.cell_value(i+1, 0)
    y.append(cell_)
x = array(x)
y = array(y)
def Re(x, s, t, k, h, b):
    xt = 2*pi*x*t
    sxtk = s*xt**(-k)
    xth = xt**(-h)
    scale = 38000 - 130
    term1 = 1 + sxtk*cos(k*pi/2) + xth*cos(h*pi/2)
    term2 = sxtk*sin(k*pi/2) + xth*sin(h*pi/2) + 1/(xt*b)
    return 130 + scale * term1 / (term1**2 + term2**2)
model = Model(Re)
params = model.make_params(s=0.5, t=0.5, k=0.2, h=0.5, b=0.5)
result = model.fit(y, params, x=x)
print(result.fit_report())
A few important notes:
readability counts. Really, it matters more than anything else, and it is part of why you chose Python. No one - including you - will understand the mess that is your fitting function. That makes it nearly useless: even if it gives a good fit, you will not know what it did. I did some refactoring and simplifying, but of course you should check that the rewritten function matches your original.
your example did not give initial values for the fitting parameters. scipy.optimize.curve_fit() silently allows this, giving a value of 1 to every parameter. This is a terrible mis-feature that causes trouble for many users. Yes, providing them looks like extra work, but fitting methods require initial values. You always need to give initial values that make sense for your data. There are no exceptions.
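To illustrate the point about initial values, here is a minimal, self-contained sketch using a toy exponential-decay model (the parameters a and tau are hypothetical, not the ones from the question). It shows that curve_fit runs without p0 by silently starting every parameter at 1.0, and how to pass explicit starting values instead:

```python
import numpy as np
from scipy.optimize import curve_fit

# Toy model with hypothetical parameters a and tau (not the question's Re()).
def decay(x, a, tau):
    return a * np.exp(-x / tau)

x = np.linspace(0, 5, 50)
y = decay(x, 2.5, 1.3)          # synthetic, noise-free data

# Without p0, curve_fit silently starts every parameter at 1.0.
popt_default, _ = curve_fit(decay, x, y)

# Better: pass initial values that are plausible for your data.
popt, _ = curve_fit(decay, x, y, p0=[2.0, 1.0])
```

For this well-behaved toy model both calls happen to converge, but for a function as nonlinear as yours, starting everything at 1.0 can easily land in a poor local minimum.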
In the lmfit example, initial values are given for each of the parameters. Here they are all set to values between 0 and 1, but you will need to set them to reasonable initial values for your data.
To set bounds on the parameters, you could do
params['k'].min = 0
params['k'].max = 1
and so forth. To say that h > k, you would define a new parameter -- I'll call it h_minus_k -- and then use a mathematical expression to set h to be computed from that value and k:
params.add('h_minus_k', value=0.1, min=0, max=1)
params['h'].expr = 'k + h_minus_k'
Now, since h_minus_k cannot be negative, h > k will be maintained. There are still 5 variables in the fit, now s, t, k, b, and h_minus_k. h will vary in the fit but is not freely varying -- its value is constrained by the values of k and h_minus_k. With that constraint, you can try the fit again and compare the results:
params = model.make_params(s=0.5, t=0.5, k=0.2, h=0.5, b=0.5)
params['k'].min = 0
params['k'].max = 1
params['h'].min = 0
params['h'].max = 1
params.add('h_minus_k', value=0.1, min=0, max=1)
params['h'].expr = 'k + h_minus_k'
result2 = model.fit(y, params, x=x)
print(result2.fit_report())
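For what it's worth, the same reparameterization trick also works with plain scipy.optimize.curve_fit via its bounds argument: fit h_minus_k instead of h, so h = k + h_minus_k can never drop below k. A minimal sketch, using a hypothetical two-parameter power-law model rather than the Re() function above:

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical stand-in model: fit (k, h_minus_k) instead of (k, h),
# so h = k + h_minus_k >= k by construction.
def model(x, k, h_minus_k):
    h = k + h_minus_k          # h can never drop below k
    return x**k + x**h

x = np.linspace(1, 3, 40)
y = model(x, 0.2, 0.3)         # synthetic data with true k=0.2, h=0.5

# bounds keep 0 <= k <= 1 and 0 <= h_minus_k <= 1
popt, _ = curve_fit(model, x, y, p0=[0.3, 0.2],
                    bounds=([0, 0], [1, 1]))
k_fit, hmk_fit = popt
h_fit = k_fit + hmk_fit        # recovered h, never below k_fit
```

lmfit's expr mechanism is more flexible (arbitrary algebraic constraints, and h is reported as a derived parameter in the fit report), but this shows the underlying idea with no extra dependency.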
Upvotes: 1