Reputation: 73
I am running a fairly large system of ODEs using scipy's odeint. To keep the system coherent, readable and short, I have divided most of it into classes that call upon each other. From a readability point of view this works fine; however, the repeated calls to Python objects significantly slow down the actual ODE solver. I have looked into ways of optimizing the code so that it keeps some level of readability while becoming more efficient at the calculations (either using Numba, or using SymPy and converting the expressions to NumPy-aware lambda functions), but I have not been very successful so far. What strategies could be helpful in this case? I have provided a simplified version of the code below. Thank you in advance.
from scipy.integrate import odeint

# The NaT (transient sodium channel) class, with attribute g and methods
# I(v, m, h), dmdt(v, m) and dhdt(v, h), is omitted here for brevity.

class NaL():
    g = 0
    E = 0  # reversal potential (placeholder value)
    def I(self, v, E): return self.g * (v - E)  # leak current

class Neuron():
    C_m = 1.0  # membrane capacitance
    V = 0  # voltage
    m, h = 0, 0  # activating variables

    def I_Na(self):  # sodium currents
        return self.NaT.I(self.V, self.m, self.h) + self.NaL.I(self.V, self.NaL.E)

    def __init__(self):
        self.NaT = NaT()
        self.NaL = NaL()
        self.NaT.g = 3000
        self.NaL.g = 20
I1 = Neuron()

def Run(X, t):
    I1.V, I1.m, I1.h = X
    dydt = [(0 - I1.I_Na()) / I1.C_m,  # dV/dt for the neuron
            I1.NaT.dmdt(I1.V, I1.m),   # dm/dt for the sodium channel
            I1.NaT.dhdt(I1.V, I1.h)    # dh/dt for the sodium channel
            ]
    return dydt

# t is the array of time points (not shown here)
X = odeint(Run, [-70, 0.0050, 0.9961], t)
Upvotes: 1
Views: 2483
Reputation: 4405
The bottleneck of your integration is almost certainly the evaluation of the derivative Run, as this has to be called multiple times for every step of the integration. Right now, you have a considerable Python overhead in there due to all the class members and function calls.
One solution to this is the module JiTCODE by yours truly: instead of defining a function to be used by the integrator, you define the differential equations symbolically (with SymPy). They are then translated to C code, which is compiled and wrapped in a Python function, which in turn is used for the integration. This way, your classes and class functions are only called a few times before the actual integration, and the integration itself is as efficient as if you had defined your derivative without any classes. (Also, you may get a considerable speed boost from the compilation, in particular if you have many neurons.)
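To illustrate the workflow first, here is a minimal sketch with a toy linear system (not your model); it only assumes the same JiTCODE API that is used below:

import numpy as np
from jitcode import jitcode, provide_basic_symbols

t, y = provide_basic_symbols()

# Toy system dy0/dt = y1, dy1/dt = -y0, written as symbolic
# expressions instead of a Python callback:
f = [y(1), -y(0)]

ODE = jitcode(f)                        # expressions are translated to C and compiled here
ODE.set_initial_value([1.0, 0.0], 0.0)  # initial state and initial time
times = np.linspace(0.1, 10.0, 100)
X = np.vstack([ODE.integrate(time) for time in times])  # one row of states per time point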
As it stands, translating your example to JiTCODE is straightforward:
import numpy as np
from jitcode import jitcode, provide_basic_symbols
t, y = provide_basic_symbols()
class NaL():
[…]
class Neuron():
[…]
I1 = Neuron()
I1.V, I1.m, I1.h = y(0), y(1), y(2)
Run = [
        (0 - I1.I_Na()) / I1.C_m,
        I1.NaT.dmdt(I1.V, I1.m),
        I1.NaT.dhdt(I1.V, I1.h)
    ]
times = range(100)  # time points for the output (renamed so the symbolic t is not overwritten)
ODE = jitcode(Run)
ODE.set_initial_value([-70, 0.0050, 0.9961], times[0])
X = np.vstack([ODE.integrate(time) for time in times[1:]])
This code does not work right now for the same reasons that your example code does not work. Note that all the arithmetic in your classes that involves dynamical variables becomes symbolic. With additions and multiplications, this works out of the box, but if you have something like math.sin or numpy.sin, you have to replace it by sympy.sin.
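For example, a hypothetical gating-rate helper (the name and constants are made up for illustration) would have to use the SymPy version of the exponential:

import sympy

# Hypothetical rate function with made-up constants.
def alpha_m(v):
    # return 0.1 * math.exp(-v / 10)   # fails once v is a symbolic dynamical variable
    return 0.1 * sympy.exp(-v / 10)    # works: keeps the expression symbolic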
Upvotes: 1