freestatelabs

How to avoid accumulated error over time and "drift" in numerical time integration?

I'm writing a few programs to study structural dynamics problems, and I'm trying to understand how to estimate and minimize the error that accumulates over long-time-scale calculations.

An explicit time integrator uses knowledge of the current state to compute the next state. Because each step is an approximation, every computed state carries some error, which propagates into the following states and compounds over the rest of the calculation. Implicit time integrators are more stable and can take larger time steps (so fewer steps are needed), but at the cost of more computation per step, since a system of equations typically has to be solved at each one.
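
To make the drift concrete, here is a minimal sketch (my own toy example: an undamped harmonic oscillator integrated with forward Euler, written in Python) where the per-step error shows up as a steady, unphysical growth in the total energy:

```python
# Undamped harmonic oscillator: x'' = -(k/m) x; the exact solution conserves energy.
k, m = 1.0, 1.0
dt, n_steps = 0.01, 100_000           # ~160 periods of simulated time

x, v = 1.0, 0.0                       # initial displacement and velocity
e0 = 0.5 * m * v**2 + 0.5 * k * x**2  # initial total energy

for _ in range(n_steps):
    # Forward (explicit) Euler: next state computed from the current state only.
    a = -(k / m) * x
    x, v = x + dt * v, v + dt * a     # each step introduces a small error ...

e = 0.5 * m * v**2 + 0.5 * k * x**2   # ... and the errors compound over the run
print(f"relative energy drift after {n_steps} steps: {e / e0 - 1:.3e}")
```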

I've used software like AGI's STK, which can compute very accurate, complicated orbits over many years. I wasn't able to find the specific theory by searching their online documentation, but I'm very curious how this kind of thing is done in practice. Whether the time step chosen for such an analysis is on the order of seconds or minutes, the overall calculation ends up requiring a huge number of time steps, which I assume means a lot of accumulated error.

There are higher-order and symplectic methods (e.g. Runge-Kutta, leapfrog) that reduce the error at each time step, but for truly long-time-scale calculations, how is accuracy maintained at scale?
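
For comparison, here is another toy sketch of mine on the same oscillator: a symplectic integrator like leapfrog keeps the energy error bounded over very long runs even though its per-step error is larger, whereas RK4's error, though much smaller per step, drifts steadily in one direction (for this problem it slowly dissipates energy):

```python
k, m = 1.0, 1.0

def accel(x):
    return -(k / m) * x

def energy(x, v):
    return 0.5 * m * v**2 + 0.5 * k * x**2

def leapfrog_step(x, v, dt):
    # Kick-drift-kick (velocity Verlet): 2nd order, symplectic.
    v += 0.5 * dt * accel(x)
    x += dt * v
    v += 0.5 * dt * accel(x)
    return x, v

def rk4_step(x, v, dt):
    # Classical 4th-order Runge-Kutta applied to the system x' = v, v' = a(x).
    k1x, k1v = v, accel(x)
    k2x, k2v = v + 0.5 * dt * k1v, accel(x + 0.5 * dt * k1x)
    k3x, k3v = v + 0.5 * dt * k2v, accel(x + 0.5 * dt * k2x)
    k4x, k4v = v + dt * k3v, accel(x + dt * k3x)
    x += (dt / 6.0) * (k1x + 2 * k2x + 2 * k3x + k4x)
    v += (dt / 6.0) * (k1v + 2 * k2v + 2 * k3v + k4v)
    return x, v

dt, n_steps = 0.2, 1_000_000          # ~31 steps per period, ~32,000 periods total
for name, step in [("leapfrog (2nd order, symplectic)", leapfrog_step),
                   ("RK4 (4th order, not symplectic)", rk4_step)]:
    x, v = 1.0, 0.0
    e0 = energy(x, v)
    for _ in range(n_steps):
        x, v = step(x, v, dt)
    print(f"{name}: relative energy error = {energy(x, v) / e0 - 1:+.3e}")
```

Running this long enough, the leapfrog energy error just oscillates within a bounded band while the RK4 error keeps growing with the number of steps, which is exactly the kind of long-run behavior I'm asking about.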
