Reputation: 910
My code solves a 1st-order differential equation using SciPy's solve_ivp function with the standard RK45 solver. The code itself is reasonably large, and I will certainly attempt to produce a minimal working example if all else fails. However, there might already be a problem at the conceptual stage, so I am hoping this is not a bug but just my lack of understanding.
The important part of the code is as follows:
nStep = 1000
time_arr = np.arange(nStep)
sol = solve_ivp(rhs_func, (0, nStep - 1), [x0, ],
                t_eval=time_arr, args=(my_args_list,))
I would like to integrate the right-hand side over the time interval [0, 999], as given by the 2nd argument, and sample the result at integer values of time within that interval, as given by the t_eval keyword argument.
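For reference, here is a self-contained sketch of the call pattern I am using; the right-hand side below is just a hypothetical placeholder standing in for my actual rhs_func, and x0 and my_args_list are dummy values:

import numpy as np
from scipy.integrate import solve_ivp

nStep = 1000
time_arr = np.arange(nStep)

# Hypothetical placeholder for my actual rhs_func; here my_args_list
# carries a single decay-rate parameter.
def rhs_func(t, x, my_args_list):
    k, = my_args_list
    return -k * x

x0 = 1.0
my_args_list = [0.01]

sol = solve_ivp(rhs_func, (0, nStep - 1), [x0, ],
                t_eval=time_arr, args=(my_args_list,))
print(sol.success, sol.t.shape, sol.y.shape)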
My expectation is that, internally, the solver samples the right-hand side at arbitrary floating-point times within the provided time interval. For many different values of my_args_list it indeed does so. However, for some fairly reasonable argument values it attempts to sample the right-hand side far outside of the provided range: for example, it attempted to sample at t = 10062.831699320916, well beyond the integration boundaries.
Am I right to assume that this behaviour is unexpected and is a bug (either in SciPy or in my code)?
I would appreciate suggestions on what may be causing such behaviour.
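If the out-of-range sampling comes from the solver's automatic step-size selection, one workaround I am considering (not yet verified) is to cap the internal step via the max_step option, so that a trial step cannot land far beyond the interval. A minimal sketch, continuing from the placeholder code above:

# Sketch: cap the solver's internal step size. Whether this actually
# prevents the out-of-range evaluations is exactly what I am unsure about.
sol = solve_ivp(rhs_func, (0, nStep - 1), [x0, ],
                t_eval=time_arr, args=(my_args_list,),
                max_step=1.0)  # never take an internal step longer than 1 time unit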
EDIT: I was finally able to produce a minimal example for this bug. I solve a simple integrator ODE dx/dt = k*[f(t) - x(t)] for some forcing vector f(t). Specifically, I provide a box signal between 0 and 10 seconds, such that f(t) = 2 for 2 <= t <= 4, and f(t) = 1 everywhere else.
# Libraries
import numpy as np
import matplotlib.pyplot as plt
from numpy.linalg import norm
from scipy.optimize import minimize
from scipy.integrate import odeint, solve_ivp
# Initialization
tmin = 0
tmax = 10
nStep = 1000
time_arr = np.linspace(tmin, tmax, nStep)
x_forcing_low = 1
x_forcing_high = 2
t_break_l = 2
t_break_r = 4
forcing = np.full(nStep, x_forcing_low)
idxs_pulse = np.logical_and(time_arr >= t_break_l, time_arr <= t_break_r)
forcing[idxs_pulse] = x_forcing_high
# Right hand side of a simple Integrator ODE dx/dt = k*[f(t) - x(t)] for some forcing vector f(t)
def dxdt(t, x, param):
    if (t < tmin) or (t > tmax):
        raise ValueError(f"{t} is out of bounds [{tmin}, {tmax}]")
    f_this = np.interp(t, time_arr, forcing)
    k, = param
    return k*(f_this - x)
# We go over different values of the integration rate, and test if solve_ivp works
kArr = 10 ** np.linspace(-3, 3, 20)
for k in kArr:
    try:
        sol = solve_ivp(dxdt, (tmin, tmax), [5, ], t_eval=time_arr, args=([k, ],))
        print(k, sol['success'])
    except ValueError as e:
        print(k, e)
The code demonstrates that for k = 0.001 the solver solve_ivp attempts to sample the right-hand side at t = 12.5, which is out of bounds [0, 10]. However, it works as expected for all other values of k. I have also implemented an analytic solution for this case (omitted for brevity), and it clearly demonstrates that the solver starts progressively breaking down for values of k < 0.1, producing completely unreasonable solutions. I will post a separate question addressing the accuracy of the forward integration scheme, as the problems are likely related.
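For context, the analytic comparison boils down to piecewise exponential relaxation towards each forcing level. Below is a sketch of one way to write it, reusing the constants from the minimal example above; the name x_exact is just illustrative, my omitted implementation may differ in detail, and this ignores the slight edge smoothing that np.interp introduces on the discretised box:

# Sketch of a piecewise-exponential reference solution for the box forcing above.
def x_exact(t, k, x0=5):
    t = np.asarray(t, dtype=float)
    # state reached at the start of the pulse, relaxing towards x_forcing_low
    x_at_l = x_forcing_low + (x0 - x_forcing_low) * np.exp(-k * (t_break_l - tmin))
    # state reached at the end of the pulse, relaxing towards x_forcing_high
    x_at_r = x_forcing_high + (x_at_l - x_forcing_high) * np.exp(-k * (t_break_r - t_break_l))
    return np.where(
        t < t_break_l,
        x_forcing_low + (x0 - x_forcing_low) * np.exp(-k * (t - tmin)),
        np.where(
            t < t_break_r,
            x_forcing_high + (x_at_l - x_forcing_high) * np.exp(-k * (t - t_break_l)),
            x_forcing_low + (x_at_r - x_forcing_low) * np.exp(-k * (t - t_break_r)),
        ),
    )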
Upvotes: 0
Views: 448