carmichael561

Reputation: 73

Scipy odeint Non-negative solution

Apparently, getting a non-negative solution from an ODE solver is non-trivial. In Matlab, there is the NonNegative option for certain solvers to get a non-negative solution. Is there a similar option in scipy?

If not, what is the "best" way of imposing a non-negativity constraint? At the moment, I have something like the following:

def f(x, t, params):
    ... ... ...
    ... ... ...
    # where the state is already non-positive and its derivative would push it
    # further negative, clamp that derivative to zero
    x_dot[(x <= 0) * (x_dot <= 0)] = 0.0
    return x_dot
... ... ...
x = odeint(f, x0, t, args=params)

However, this leads to numerical instabilities: I've had to set mxstep to 1e8 and hmin to 1e-15.
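For reference, the call then looks roughly like this (odeint expects an integer for mxstep):

x = odeint(f, x0, t, args=params, mxstep=int(1e8), hmin=1e-15)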

Upvotes: 5

Views: 2849

Answers (2)

travelingbones

Reputation: 8408

I am using scipy's odeint and my model has constraints akin to yours: x[0] is a population, so it must be non-negative, and x[1], x[2], and 1 - x[1] - x[2] must all be non-negative as well. Sometimes the ODE solver steps in a way that violates these constraints. In my case I wrote my own (very simple) ODE solver and included the needed adjustments. In the example below, you can replace the code between ## now fix problems and ### DONE with fix with a fix for your scenario. It is then called just like odeint (but only with these restricted arguments).

import numpy as np

def euler_solver(ode_function, x, ts, args=(), eta=.01):
    # eta = step size of the forward-Euler update
    xs = []
    for t in ts:
        dx = np.array(ode_function(x, t, *args))
        x = x + eta*dx

        ## now fix problems arising from the previous step:
        x[0] = max(x[0], 0)  ## population can't be negative
        D = 1 - x[1] - x[2]
        d = np.array([x[1], x[2], D])
        if min(d) < 0:
            d -= min(d)  ## shift all of them up so they are non-negative
        d /= sum(d)  ## make them sum to 1
        ## restore ag as x1, n as x2:
        x[1] = d[0]
        x[2] = d[1]
        ### DONE with fix.
        xs.append(list(x))
        if np.abs(x[0]) < .001: break  ## if all the humans die, we can stop the simulation
    return np.array(xs)
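For reference, a minimal usage sketch (the right-hand side f, initial state, and parameters below are hypothetical placeholders, not part of the answer's model):

import numpy as np

def f(x, t, a, b):
    # hypothetical dynamics: x[0] is a population, x[1] and x[2] are fractions
    return [a*x[0]*(1 - x[0]) - b*x[1]*x[0],
            0.1*(x[2] - x[1]),
            -0.1*(x[2] - x[1])]

x0 = np.array([0.5, 0.3, 0.3])
ts = np.arange(0, 10, 0.01)
xs = euler_solver(f, x0, ts, args=(1.0, 0.5), eta=0.01)

Note that the solver advances by a fixed eta per entry of ts, so choose the spacing of ts to match eta if you want the output aligned with those times.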

Upvotes: 0

RHC

Reputation: 340

The problem is not merely that you have to avoid square-rooting a negative x. The "best" way of imposing the constraint depends on your system's application and on what behavior you consider "reasonable". If your system does not have an equilibrium at 0, then your problem may be ill-posed: what would it mean for it to move at non-zero speed into the negative-x domain? If the interpretation is that the solution should stay at zero, then your intended model is no longer an ODE system: it is a hybrid dynamical system with a non-smooth component, i.e. when the trajectory x(t) hits 0 at t = t_1, it must stay at 0 for all t > t_1. This can be achieved easily with a proper dynamical systems package such as PyDSTool.

Alternatively, if x = 0 is a stable equilibrium, you simply need to prevent evaluation of f for x < 0. This, too, can be hacked with event detection.

In either case, event detection at x = 0 is tricky when your f is undefined for x < 0. Few standard ODE solvers can literally be forced to avoid evaluating in a sub-domain under all circumstances, and most event detection involves evaluations on either side of the boundary. A pragmatic solution is to choose a small value of x below which it is safe (in the context of your application) to declare x = 0, and make the event trigger when x reaches that threshold; provided you also keep the step size small enough, this should prevent x from ever being evaluated at a negative value. You would then add a condition setting x = 0 after that point, if that is the behavior you want. Again, that's a bit of a fuss in scipy/python, but you can do it. It's also fairly easy to set up the desired behavior in PyDSTool, which I'd be willing to advise you on if you post in its help forums.
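As an illustration of that thresholded-event idea, here is a minimal sketch using scipy's solve_ivp (the right-hand side, threshold, and step limit are hypothetical choices for demonstration, not part of this answer; PyDSTool remains the recommended tool):

import numpy as np
from scipy.integrate import solve_ivp

eps = 1e-3  # value below which we declare x = 0 (application-dependent)

def f(t, x):
    # hypothetical right-hand side that is undefined for x < 0
    return [-np.sqrt(x[0])]

def hit_floor(t, x):
    return x[0] - eps  # crosses zero when x drops to the threshold

hit_floor.terminal = True   # stop the integration at the event
hit_floor.direction = -1    # only trigger while x is decreasing

t_eval = np.linspace(0, 5, 501)
sol = solve_ivp(f, (0, 5), [1.0], events=hit_floor, t_eval=t_eval, max_step=0.01)

# hold x at 0 after the event, if that is the intended behavior
x_full = np.zeros_like(t_eval)
x_full[:sol.y.shape[1]] = sol.y[0]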

Upvotes: 1
