Reputation: 1
I would like to use the Kalman filter implementation attached below to track objects moving in a video. Objects are in motion; some leave the video boundaries and others enter it. Instead of starting new tracks for the new objects, the algorithm reassigns the tracks from the old objects to the new ones. How can I solve this problem?
Here is an example image to explain better, in case I haven't been sufficiently clear.
import numpy as np

class KalmanFilter(object):
    def __init__(self):
        self.dt = 0.005  # delta time
        self.A = np.array([[1, 0], [0, 1]])  # matrix in observation equations
        self.u = np.zeros((2, 1))  # previous state vector
        # (x, y) tracking object center
        self.b = np.array([[0], [255]])  # vector of observations
        self.P = np.diag((3.0, 3.0))  # covariance matrix
        self.F = np.array([[1.0, self.dt], [0.0, 1.0]])  # state transition matrix
        self.Q = np.eye(self.u.shape[0])  # process noise covariance
        self.R = np.eye(self.b.shape[0])  # observation noise covariance
        self.lastResult = np.array([[0], [255]])

    def predict(self):
        # Predicted state estimate
        self.u = np.round(np.dot(self.F, self.u))
        # Predicted estimate covariance
        self.P = np.dot(self.F, np.dot(self.P, self.F.T)) + self.Q
        self.lastResult = self.u  # remember the last predicted result
        return self.u

    def correct(self, b, flag):
        if not flag:  # update using the last prediction
            self.b = self.lastResult
        else:  # update using the detection
            self.b = b
        C = np.dot(self.A, np.dot(self.P, self.A.T)) + self.R  # innovation covariance
        K = np.dot(self.P, np.dot(self.A.T, np.linalg.inv(C)))  # Kalman gain
        self.u = np.round(self.u + np.dot(K, (self.b - np.dot(self.A, self.u))))
        self.P = self.P - np.dot(K, np.dot(C, K.T))
        self.lastResult = self.u
        return self.u
Upvotes: 0
Views: 765
Reputation: 7121
If you are not bound to the Kalman filter, there is another approach to this problem: it is usually solved by computing the optical flow of features across a sequence of images.
An example in OpenCV can be found here.
The idea is to find significant features to track in your image (circles, corners, etc.). To start, this can be done with a convenience function, which is handy when you do not yet know what to use:
p0 = cv2.goodFeaturesToTrack(old_gray, mask = None, **feature_params)
Then you pass the features to the optical flow function, together with two consecutive frames, and it does the rest:
p1, st, err = cv2.calcOpticalFlowPyrLK(old_gray, frame_gray, p0, None, **lk_params)
In your case you mentioned you would like to use an ID. I have no idea what your data looks like, but I suggest you run the goodFeaturesToTrack function on your image and see what it returns. That will give you an idea of what could be used for tracking.
Afterwards you could use a specific function that extracts your favourite features. Say you find a function that extracts kiwi features from an image, perhaps as ellipses, one ellipse per kiwi. You then put that list into the tracking function, and it tells you where each one went in the next image. The list has a fixed order that you can use as an ID.
The OpenCV docs for the function can be found under cv2.calcOpticalFlowPyrLK().
Upvotes: 0