Reputation: 1
I'm implementing an adaptive Kalman filter that uses maximum likelihood estimation (MLE) to update the process noise covariance matrix Q and the measurement noise covariance matrix R. The update formulas for Q and R are shown in the attached image. In practice, the estimated Q and R do not stay positive definite, and the filter's tracking degrades.
Code: Here's the key part of my implementation:
function [x_updated, P_updated, Q_next, R_next, memory] = adaptive_kalman_filter(x_prev, P_prev, Q_prev, R_prev, z, memory, Phi, H, N)
    % Prediction step
    x_predicted = Phi * x_prev;
    P_predicted = Phi * P_prev * Phi' + Q_prev;

    % Update step
    K = P_predicted * H' / (H * P_predicted * H' + R_prev);
    x_updated = x_predicted + K * (z - H * x_predicted);
    P_updated = (eye(size(P_predicted)) - K * H) * P_predicted;

    % Innovation (residual) and state correction
    m_k = z - H * x_predicted;
    delta_x_k = x_updated - x_predicted;

    % Append to memory, keeping only the latest N samples (sliding window)
    memory.m = [memory.m, m_k];
    memory.delta_x = [memory.delta_x, delta_x_k];
    if size(memory.m, 2) > N
        memory.m = memory.m(:, end-N+1:end);
        memory.delta_x = memory.delta_x(:, end-N+1:end);
    end

    % MLE update of Q and R (cov needs at least two samples, so keep the
    % previous matrices until the window holds more than one column)
    current_N = size(memory.m, 2);
    if current_N > 1
        R_next = cov(memory.m') - H * P_predicted * H';
        Q_next = cov(memory.delta_x') + P_updated - Phi * P_prev * Phi';
    else
        R_next = R_prev;
        Q_next = Q_prev;
    end
end
What I've tried:
I adjusted the initial values of Q and R, but the problem persists.
I used a sliding window to limit the number of residuals and state corrections used in the MLE update, but this didn't help.
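A quick diagnostic that exposes the failure mode is to log the smallest eigenvalue of the adapted matrices after every filter step; once it goes negative, they are no longer valid covariances. This is a sketch, assuming `Q_next` and `R_next` are the outputs of the function above:

```matlab
% Diagnostic sketch: warn as soon as the adapted noise covariances
% lose positive semidefiniteness (symmetrize first to remove round-off).
if min(eig((R_next + R_next') / 2)) < 0
    warning('R_next lost positive semidefiniteness');
end
if min(eig((Q_next + Q_next') / 2)) < 0
    warning('Q_next lost positive semidefiniteness');
end
```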
Expected result: I want to stabilize Q and R so that they remain positive and converge to reasonable values, allowing the adaptive Kalman filter to accurately track the system state.
Upvotes: 0
Views: 32
Reputation: 1
After further reflection, I realized that the poor performance of the adaptive Kalman filter based on maximum likelihood estimation (MLE) is fundamentally due to a few key issues with the window length:
1. If the window length is too small, the covariance matrices cannot be estimated accurately.
2. Even with a large window length, the algorithm is prone to divergence.
3. In the early stages, when data is insufficient, once the algorithm starts to diverge it cannot recover later, even with a large window length.
Proposed solution: a better approach would be to:
1. Set a larger window length to improve the accuracy of the covariance estimation.
2. In the early stages, while data is still limited, fix the noise covariance matrices to constants chosen from empirical knowledge.
3. Accept that this solution increases the computational overhead.
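The steps above can be sketched as a guarded replacement for the Q/R update block. Assumptions of mine, not part of the original filter: `Q0`/`R0` are empirically chosen constant matrices, `N_min` is a hypothetical warm-up threshold, and `make_psd` is a helper I am introducing to keep the estimates valid covariances:

```matlab
function [Q_next, R_next] = guarded_noise_update(memory, H, Phi, ...
        P_prev, P_predicted, P_updated, Q0, R0, N_min)
    % Fall back to empirical constants until the sliding window holds
    % enough samples for a stable covariance estimate.
    current_N = size(memory.m, 2);
    if current_N < N_min
        Q_next = Q0;
        R_next = R0;
        return;
    end
    % MLE-style estimates; the subtractions can make these indefinite.
    R_raw = cov(memory.m') - H * P_predicted * H';
    Q_raw = cov(memory.delta_x') + P_updated - Phi * P_prev * Phi';
    % Project onto the positive semidefinite cone so one bad estimate
    % cannot push the filter into irrecoverable divergence.
    R_next = make_psd(R_raw);
    Q_next = make_psd(Q_raw);
end

function A_psd = make_psd(A)
    A = (A + A') / 2;             % enforce symmetry
    [V, D] = eig(A);
    d = max(diag(D), 1e-9);       % clip negative eigenvalues to a small floor
    A_psd = V * diag(d) * V';
end
```

With this guard, the extra cost of a larger window is mainly the two `cov` calls plus the eigendecompositions at each step, which is the computational overhead mentioned above.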
This is my understanding of, and proposed solution to, the issue I previously encountered.
Upvotes: 0