Noob Programmer

Reputation: 742

lmfit for plotting parameters

I am trying to do a sensitivity analysis of a simple chemical reaction system: A -> B (with reaction rate k1), A1 -> B (k2), B -> C (k3), and C -> B (k4). The point is to plot the rates against each other: a small k1 value should correspond to a large k2 value, and k3 and k4 should come out roughly equal. For k1 and k2 I get some sort of a trend, but k4 is always the same. What am I missing? Am I varying the parameters wrong?
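Written out (these are exactly the equations the f() function in my attempt below implements), the rate equations are:

$$
\begin{aligned}
\frac{da}{dt} &= -k_1 a \\
\frac{da_1}{dt} &= -k_2 a_1 \\
\frac{db}{dt} &= k_1 a + k_2 a_1 + k_4 c - k_3 b \\
\frac{dc}{dt} &= k_3 b - k_4 c
\end{aligned}
$$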

My attempt

import matplotlib.pyplot as plt
import numpy as np
from lmfit import Parameters, fit_report, report_fit, Minimizer, minimize, printfuncs, conf_interval, conf_interval2d
from scipy.integrate import odeint
from sys import exit
time = 10
Nt = 200
tt = np.linspace(0,time, Nt)   
p = Parameters()
# add with tuples: (NAME VALUE VARY MIN  MAX  EXPR  BRUTE_STEP)
p.add_many(('k1', 0.5, True, 0,1), ('k2', 0.5, True, 0,1), ('k3', 0.5, True, 0,1), ('k4', 0.5, True, 0,1))

time = 10
Nt = 11
tt = np.linspace(0,time, Nt)


def f(xs, t, ps):
    """Test"""
    try:
        k1 = ps['k1'].value
        k2 = ps['k2'].value
        k3 = ps['k3'].value
        k4 = ps['k4'].value
    except:
        k1, k2, k3, k4 = ps

    a, a1, b, c = xs
    da_dt = - k1*a
    da1_dt = - k2*a1
    db_dt = k1*a + k2*a1 + k4*c - k3*b
    dc_dt = k3*b - k4*c
    return da_dt, da1_dt, db_dt, dc_dt

def g(t, x0, ps):
    """
    Solution to the ODE x'(t) = f(t,x,k) with initial condition x(0) = x0
    """
    x = odeint(f, x0, t, args=(ps,))
    return x

def residual(ps, ts, data):
    x0 = np.array([1, 0.5, 0, 0])
    model = g(ts, x0, ps)
    # difference between model and data, flattened for the minimizer
    return (model - data).ravel()


#exit()
x0 = np.array([1,0.5,0,0])

# generate synthetic data from the true rates and add noise
k1, k2, k3, k4 = 1, 1, 1, 1
true_params = np.array((k1, k2, k3, k4))
data = g(tt, x0, true_params)
data += np.random.normal(size=data.shape)

# create Minimizer
mini = Minimizer(residual, p, fcn_args=(tt, data))

# first solve with Nelder-Mead
out1 = mini.minimize(method='emcee')

out2 = mini.minimize(params=out1.params, method='Powell')

print(fit_report(out2))
report_fit(out1.params, min_correl=2)
##
ci, trace = conf_interval(mini, out2, sigmas = [0.01,0.02,0.3,0.4],
                                trace=True, verbose=True, maxiter = 200)


plt.figure(1)
x, y, prob = trace['k1']['k1'], trace['k1']['k2'], trace['k1']['prob']
#x, y, prob = trace['k2']['k2'], trace['k2']['k1'], trace['k2']['prob']
plt.scatter(x, y, c=prob, s=30)
#plt.gca().set_xlim((0.00, 0.1))
#plt.gca().set_ylim((0.01, 0.1))
plt.ylabel('k2')
plt.xlabel('k1')
#plt.show()

plt.figure(2)
x2, y2, prob2 = trace['k3']['k3'], trace['k3']['k4'], trace['k3']['prob']
#x2, y2, prob2 = trace['k4']['k4'], trace['k4']['k3'], trace['k4']['prob']
plt.scatter(x2, y2, s=30, c=prob2)
#plt.gca().set_xlim((0, 0.1))
#plt.gca().set_ylim((0, 0.1))
plt.xlabel('k3')
plt.ylabel('k4')
plt.show()

Upvotes: 0

Views: 233

Answers (1)

M Newville

Reputation: 7862

For your example, k3 and/or k4 end up at, or at least very, very close to, your upper bound of 1.0. You should use bounds sparingly, and any bounds you do set should be physically meaningful. That is, the real "problem" is that you use Parameters.add_many() and constrain all of the parameters to [0, 1], even though the true values you generated the data with are exactly 1.

In this sense, the fit did work and even did what you asked ;). But if you relax those bounds, you'll probably get a better result.
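For example, a minimal sketch of what I mean, keeping your parameter names and starting values (the max of 10 here is just an arbitrarily looser limit for illustration, not a recommended value):

from lmfit import Parameters

p = Parameters()
# same starting values as before, but with room above the true rate of 1.0
p.add('k1', value=0.5, min=0, max=10)
p.add('k2', value=0.5, min=0, max=10)
p.add('k3', value=0.5, min=0, max=10)
p.add('k4', value=0.5, min=0, max=10)

You could also drop the max entirely and keep only min=0, since rate constants only need to be non-negative.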

FWIW, it is probably not necessary to first solve with Nelder-Mead and then with leastsq (and if you do want to do that, you should spell Nelder correctly).
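If you do want the two-step approach, it would look something like this (a sketch that reuses the mini Minimizer from your code; 'nelder' and 'leastsq' are the method names lmfit accepts):

# simplex (Nelder-Mead) first, then refine with least-squares
out1 = mini.minimize(method='nelder')
out2 = mini.minimize(params=out1.params, method='leastsq')
print(fit_report(out2))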

Upvotes: 1
