salgarcia

Reputation: 517

Tracking in video sequence with user-defined target to track

I have a project to create an application where the user can draw a region of interest (in this example, a rectangle around a vehicle to track) and the application will automatically track the vehicle in the subsequent frames of the recorded video.

The method I have implemented so far using OpenCV is as follows:

(1) Get the user-defined rectangle (region of interest) from the initial_frame:

[image: initial frame with the user-drawn rectangle]

(2) Use goodFeaturesToTrack on the region of interest and store the initial_features:

[image: features detected inside the region of interest]

(3) Step through the next frames in the video (a simplified version of this loop is shown below):

3.1: Get next_frame
3.2: Call calcOpticalFlowPyrLK(prevImg, nextImg, prevPts, nextPts, ...), where prevImg is always initial_frame and prevPts is always initial_features; each time I only update nextImg with the next frame of the video
3.3: Get the bounding rectangle for the newly found features from nextPts*
3.4: Display the frame with the bounding rectangle

[image: frame with the bounding rectangle around the tracked features]
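Simplified, my implementation of steps (2) and (3) looks roughly like this (Emgu CV 2.x wrappers; the feature count, window size, pyramid depth, and termination criteria here are illustrative values, not what I have tuned):

    using System.Collections.Generic;
    using System.Drawing;
    using System.Linq;
    using Emgu.CV;
    using Emgu.CV.Structure;

    // (2) Detect features inside the user-drawn rectangle of the initial frame
    PointF[] GetInitialFeatures(Image<Gray, byte> initialFrame, Rectangle userRect)
    {
        initialFrame.ROI = userRect;
        PointF[] roiFeatures = initialFrame.GoodFeaturesToTrack(100, 0.01, 5, 3)[0];
        initialFrame.ROI = Rectangle.Empty;

        // Shift the ROI-relative points back into full-image coordinates
        return roiFeatures
            .Select(p => new PointF(p.X + userRect.X, p.Y + userRect.Y))
            .ToArray();
    }

    // (3.2)-(3.3) Flow the initial features into the next frame and box them
    Rectangle TrackInFrame(Image<Gray, byte> initialFrame,
                           PointF[] initialFeatures,
                           Image<Gray, byte> nextFrame)
    {
        PointF[] nextFeatures;
        byte[] status;
        float[] trackError;
        OpticalFlow.PyrLK(
            initialFrame, nextFrame,    // prevImg is always initial_frame
            initialFeatures,            // prevPts is always initial_features
            new Size(21, 21), 3,        // search window size and pyramid levels
            new MCvTermCriteria(30, 0.01),
            out nextFeatures, out status, out trackError);

        // Keep only the points the flow actually found (status == 1)
        List<PointF> found = nextFeatures.Where((p, i) => status[i] == 1).ToList();
        return RectFromPoints(found);   // helper shown at the bottom of the question
    }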

This method works in most of the 50 consecutive frames, except for a few times when the tracking results in something like this:

[image: frame with an inaccurate bounding rectangle]

but beyond 50 frames, the results become less and less accurate:

[image: later frame where the bounding rectangle no longer matches the vehicle]

It does make sense that the features found in the original image become less and less prevalent in the subsequent frames, so I am looking for ideas on how to improve this method of tracking, or perhaps a better method altogether.

One idea that has come up is using a Kalman filter; however, I don't know what to use for the measurement and dynamic parameters, or how to update the measurements from the features found by the optical flow. I'm open to any suggestions, or even entirely different methods for object tracking in this kind of application.
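For what it's worth, my rough understanding of a constant-velocity setup is below (state = box center plus its velocity, measurement = centroid of the tracked optical-flow points), using Emgu's Kalman wrapper. I am not confident about the noise covariances, the property names may differ between Emgu versions, and `centroid` is assumed to be the mean of the status == 1 flow points:

    using System.Drawing;
    using Emgu.CV;
    using Emgu.CV.Structure;

    // Rough constant-velocity model: state = (cx, cy, vx, vy),
    // measurement = (cx, cy) = centroid of the tracked flow points.
    // All noise values below are untuned guesses.
    Kalman kalman = new Kalman(4, 2, 0);

    kalman.TransitionMatrix = new Matrix<float>(new float[,]
    {
        { 1, 0, 1, 0 },   // cx' = cx + vx
        { 0, 1, 0, 1 },   // cy' = cy + vy
        { 0, 0, 1, 0 },
        { 0, 0, 0, 1 }
    });

    kalman.MeasurementMatrix = new Matrix<float>(new float[,]
    {
        { 1, 0, 0, 0 },   // only the position is observed
        { 0, 1, 0, 0 }
    });

    // Small process noise = trust the motion model;
    // larger measurement noise = distrust the jittery flow centroid
    var processNoise = new Matrix<float>(4, 4);
    for (int i = 0; i < 4; i++) processNoise[i, i] = 1e-4f;
    kalman.ProcessNoiseCovariance = processNoise;

    var measurementNoise = new Matrix<float>(2, 2);
    for (int i = 0; i < 2; i++) measurementNoise[i, i] = 1e-1f;
    kalman.MeasurementNoiseCovariance = measurementNoise;

    // Per frame: predict, then correct with the measured centroid
    kalman.Predict();
    PointF c = centroid;                  // mean of the status == 1 flow points
    var z = new Matrix<float>(2, 1);
    z[0, 0] = c.X;
    z[1, 0] = c.Y;
    Matrix<float> state = kalman.Correct(z);
    // state[0,0], state[1,0] is the smoothed box center for this frame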

*Note: this is the function I use to get the bounding rectangle of the array of features returned by the optical flow (I'm using Emgu CV here):

    public Rectangle RectFromPoints(List<PointF> points)
    {
        using (MemStorage stor = new MemStorage())
        {
            Contour<PointF> contour = new Contour<PointF>(stor);

            // Remove points far outside the major grouping of all the other points
            var newPoints = RemoveOutlierPoints(points);

            foreach (PointF pnt in newPoints)
            {
                contour.Push(pnt);
            }

            // Approximate the point set as a polygon and take its bounding box
            var contPoly = contour.ApproxPoly(3, stor);
            return contPoly.BoundingRectangle;
        }
    }
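RemoveOutlierPoints just drops points far from the main cluster, as the comment says; a simplified stand-in for it (the 2.5x-median threshold here is an illustrative value, not my exact logic) would be:

    // Simplified stand-in for RemoveOutlierPoints: drop points whose distance
    // to the centroid is much larger than the median distance (2.5x here).
    private static List<PointF> RemoveOutlierPoints(List<PointF> points)
    {
        if (points.Count < 3)
            return new List<PointF>(points);

        float cx = points.Average(p => p.X);
        float cy = points.Average(p => p.Y);

        List<double> dists = points
            .Select(p => Math.Sqrt((p.X - cx) * (p.X - cx) + (p.Y - cy) * (p.Y - cy)))
            .ToList();

        double median = dists.OrderBy(d => d).ElementAt(dists.Count / 2);

        return points.Where((p, i) => dists[i] <= 2.5 * median).ToList();
    }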

Upvotes: 4

Views: 2173

Answers (3)

eyebies

Reputation: 39

OpenCV 3.0 contrib has a couple of trackers available (TLD, MEDIANFLOW, BOOSTING and MIL), and you can find a C++ example as well. A performance comparison of state-of-the-art trackers is available on the VOT challenge page.
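Since the question uses Emgu CV, note that these contrib trackers are wrapped there too (the Emgu.CV.Tracking namespace in Emgu CV 3.x). A rough sketch, with the caveat that class names and constructor parameters vary between Emgu versions:

    using System.Drawing;
    using Emgu.CV;
    using Emgu.CV.Tracking;

    // Seed a contrib tracker (MedianFlow here; TLD, Boosting and MIL have
    // matching classes) with the user's rectangle on the first frame.
    TrackerMedianFlow tracker = new TrackerMedianFlow(10); // grid size; varies by version
    tracker.Init(firstFrame, userRect);                    // firstFrame is a Mat

    // Then, for every subsequent frame:
    Rectangle box;
    if (tracker.Update(nextFrame, out box))
    {
        // box is the tracker's estimate for this frame
    }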

Upvotes: 1

tektok tek

Reputation: 66

Good day. What you have is a drift problem. The features you extract the first time belong only to the object, but as each new frame updates the old template, some of the background clutter gets included in the new template. You need to update your template with a similar template, and there are algorithms for this, such as BMR-adjustment; a simple mitigation along these lines is sketched below.
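For example, one simple scheme (a sketch only, reusing the asker's Emgu CV setup; the re-seed interval and variable names are illustrative) is to track frame-to-frame and periodically re-detect features inside the current bounding box, so the template follows the object instead of the background:

    // Sketch: track frame-to-frame and re-seed the features inside the
    // current box every few frames, instead of always flowing from frame 0.
    const int ReseedInterval = 10;   // illustrative value

    if (frameIndex % ReseedInterval == 0)
    {
        // Re-detect features inside the current bounding box
        currFrame.ROI = currBox;
        PointF[] fresh = currFrame.GoodFeaturesToTrack(100, 0.01, 5, 3)[0];
        currFrame.ROI = Rectangle.Empty;

        prevFeatures = fresh
            .Select(p => new PointF(p.X + currBox.X, p.Y + currBox.Y))
            .ToArray();
    }
    else
    {
        prevFeatures = trackedFeatures;   // carry over last frame's points
    }

    prevFrame = currFrame;   // next PyrLK call uses the previous frame, not frame 0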

Upvotes: 0

Davis King

Reputation: 4791

Try using a correlation tracker. The modern ones are pretty good, for example: https://www.youtube.com/watch?v=-8-KCoOFfqs. You can also get the code for this in dlib.

Upvotes: 1
