Reputation: 1971
I am using the ode solver from scipy.integrate to solve my differential equation. I wanted to see whether the final result was affected by the choice of the step size dt used in the integration, and the plot produced by the code below shows what I get. I would expect the result to be correct for any small integration step size, but I see almost the opposite... does anyone have an idea of what's going on?
--
Code:
from scipy.integrate import ode
import matplotlib.pyplot as plt
import numpy as np

y0, t0 = 0., 0

def f2(t, y, arg1):
    # triangular forcing: rises as arg1*t until t0 = 5, then falls symmetrically
    t0 = 5
    shape = np.piecewise(t, [t < t0, t >= t0], [arg1*t, arg1*(2*t0 - t)])
    return shape

t1 = 10.
dtlist = np.array([t1/j for j in range(1, 50)])
temp = []
for i in dtlist:
    dt = i
    r = ode(f2).set_integrator('zvode', method='bdf')
    r.set_initial_value(y0, t0).set_f_params(2.0)
    while r.successful() and r.t < t1:
        r.integrate(r.t + dt)
    temp.append(r.y)  # final value reached for this step size

fig, axs = plt.subplots(1, 1, figsize=(15, 6), facecolor='w', edgecolor='k')
xaxis = dtlist
yaxis = temp
axs.plot(xaxis, yaxis, 'bo', label='obtained')
axs.plot(xaxis, xaxis - xaxis + 50, 'r--', label='expected')
axs.set_xlabel(r'time increment $\delta t$')
axs.set_ylabel('Final $y$ value')
axs.legend(loc=4)
Upvotes: 1
Views: 420
Reputation: 25992
You need to ensure that the last integration step always ends exactly at t1. Due to floating-point errors, the step that should finish the integration can leave r.t slightly less than t1, so that one additional, unwanted step to roughly t1 + dt is performed. You could use

r.integrate(min(r.t + dt, t1))

to cut off the integration at the right time, or do something more involved to catch the case where r.t + dt is already close to t1.
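For illustration, here is a minimal, self-contained sketch of the loop with that clamp, using the same right-hand side as in the question; the coarse step size dt = t1/3 is an arbitrary choice for this sketch, not taken from the question:

from scipy.integrate import ode
import numpy as np

# Same right-hand side as in the question.
def f2(t, y, arg1):
    t0 = 5
    return np.piecewise(t, [t < t0, t >= t0], [arg1*t, arg1*(2*t0 - t)])

y0, t0, t1 = 0., 0, 10.
dt = t1/3   # a coarse step size, chosen arbitrarily for this sketch

r = ode(f2).set_integrator('zvode', method='bdf')
r.set_initial_value(y0, t0).set_f_params(2.0)

while r.successful() and r.t < t1:
    # Clamp the target time: the final step then ends exactly at t1
    # instead of overshooting to roughly t1 + dt.
    r.integrate(min(r.t + dt, t1))

print(r.t, r.y)   # r.t ends exactly at t1; r.y should be close to the exact value 50

The min() clamp is the simplest guard; a tolerance-based check (for example, treating r.t + dt as already "at" t1 when it lies within some small fraction of dt) would be the more involved variant mentioned above.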
Upvotes: 1