Reputation: 29
I have been working on Kalman filters recently. There are a lot of terms involved which need to be understood thoroughly to implement and tune the filter well. I have no clear understanding when it comes to deciding on the error covariance, the process noise covariance and the measurement noise covariance. The error covariance is still okay to work with, as it basically defines the uncertainty between the actual state and the assumed/estimated state, and the correlation between the uncertainties of the state vector components. This covariance is updated on each successive iteration and gradually converges to a minimum value as the estimates become more accurate relative to the actual state over time.
For the process noise and measurement noise covariance matrices, I started with an assumed value of a 3 x 3 identity matrix as Q (trial and error). The results didn't look promising, so I tried plugging in this matrix instead:
Q = [(T^5)/20, (T^4)/8, (T^3)/6;
(T^4)/8, (T^3)/3, (T^2)/2;
(T^3)/6, (T^2)/2, T];
(I found this matrix in a paper; T is the sampling time.)
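For reference, here is how I construct it in MATLAB. The scalar q is something I added myself as a tuning knob for the overall noise level, and T = 0.1 is just an example value, not from the paper:

T = 0.1;   % sampling time in seconds (example value)
q = 1;     % overall process noise scale, tuned by hand
Q = q * [(T^5)/20, (T^4)/8, (T^3)/6;
         (T^4)/8,  (T^3)/3, (T^2)/2;
         (T^3)/6,  (T^2)/2, T];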
That matrix seemed to work fine and gave good results, but the reasoning behind it isn't clear to me. I also tried various other matrices, like:
Q = 0.0001*diag([0.1 0.1 0.1]);
Even this seemed to give good results. I have read in a few places online that choosing an overly large value for Q will result in a misbehaved filter.
Kindly help me with how to choose the Q matrix. Are there any guidelines for this?
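For context, this is roughly how I am feeding Q and R into the filter at the moment (single position measurement; A, H and all the numeric values below are just illustrative assumptions, not my real model):

T = 0.1;                                % sampling time (example)
A = [1, T, (T^2)/2; 0, 1, T; 0, 0, 1];  % constant-acceleration state transition
H = [1, 0, 0];                          % only position is measured
Q = 0.0001*diag([0.1, 0.1, 0.1]);       % one of the Q matrices I tried
R = 1;                                  % measurement noise variance
x = zeros(3,1);  P = eye(3);            % initial estimate and error covariance
z = 0.5;                                % one incoming measurement (example)
% Predict
x = A*x;
P = A*P*A' + Q;            % larger Q keeps the predicted uncertainty large
% Update
K = P*H' / (H*P*H' + R);   % larger Q (or smaller R) gives a larger gain K
x = x + K*(z - H*x);
P = (eye(3) - K*H)*P;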
Coming to the measurement noise covariance matrix R: after reading up a bit online, I decided to use a noise covariance calculated from the sensor readings as the measurement noise covariance. This again resulted in inaccurate results, so I had to fall back on trial and error and ended up choosing R as [1]. This works fine for now, but again I'm not satisfied with this trial-and-error method of choosing values.
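The "calculated" value I mentioned came from something like this (z_static is a hypothetical log of readings recorded while the target was held still; the numbers are made up):

z_static = [0.98, 1.02, 1.01, 0.97, 1.03, 0.99];  % raw position readings at rest
R = var(z_static);                                 % sample variance used as R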
It would be great if someone could help me with these clarifications.
Thanks
Upvotes: 0
Views: 1856
Reputation: 1
If you are still interested in the question, here is the answer. As long as the real dynamics of the object you are tracking with the Kalman filter correspond to the dynamics of your filter (which are written in matrix A), you don't need the covariance matrix Q at all. In that case the gain coefficients of your filter decrease from step to step. That is right, because the filter knows your object better and better from step to step and eventually doesn't need the measurements at all. But if the object dynamics differ from matrix A, then the filter's lag error increases from step to step for the same reason. Matrix Q solves this problem: Q stands for the expected difference between the real object dynamics and matrix A. For instance, matrix A equals

A = [1, T;
     0, 1]
for a two-dimensional state (position and velocity). If the object you are tracking accelerates with some acceleration a, the expected difference equals

d = [a*(T^2)/2;
     a*T]

Therefore, the additional prediction error equals

Q = d*d' = [(a^2)*(T^4)/4, (a^2)*(T^3)/2;
            (a^2)*(T^3)/2, (a^2)*(T^2)]
That is matrix Q. I hope it helps you.
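In MATLAB-style code, that construction looks roughly like this (a_max is an assumed bound on the unmodelled acceleration; the numbers are only examples):

T = 0.1;                       % sampling time (example)
a_max = 2;                     % expected unmodelled acceleration (assumption)
d = [a_max*(T^2)/2; a_max*T];  % its effect on the [position; velocity] state
Q = d*d';                      % process noise covariance for the 2-state model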
Upvotes: 0