Diarmaid O Cualain

Reputation: 302

Tracking of detected faces using OpenCV on iOS

First, a quick bit of background: I am fairly new to iOS and am attempting to detect faces using OpenCV on an iOS device. I was able to get the OpenCV iOS sample working fine using the tutorial here:

http://docs.opencv.org/doc/tutorials/ios/video_processing/video_processing.html#opencviosvideoprocessing

This results in a useful method that is called for each frame polled from the camera:

- (void)processImage:(Mat&)image
{
    // Do some OpenCV stuff with the image
    Mat image_copy;
    cvtColor(image, image_copy, CV_BGRA2BGR);

    // invert image
    bitwise_not(image_copy, image_copy);
    cvtColor(image_copy, image, CV_BGR2BGRA);
}

In this example, it successfully inverts the frame from the camera and displays it on the device. This is useful, as I can substitute my own OpenCV C++ code in here for whatever image processing I want to do with the frame.

Now, I wish to get face tracking implemented. OpenCV 2.4.2 onwards ships a header for a detection-based tracker, "opencv2/contrib/detection_based_tracker.hpp", which defines a class called DetectionBasedTracker. The tracking mechanism it defines uses Haar cascades in the background to detect objects. The reason I wish to use this temporal tracking method rather than frame-by-frame face detection is that the tracking is much faster than the OpenCV Haar implementation. A guide on how to implement it is demonstrated here: http://bytesandlogics.wordpress.com/2012/08/23/detectionbasedtracker-opencv-implementation/

I had success implementing this code in C++ on an Android device. The main code is as follows:

DetectionBasedTracker::Parameters param;
param.maxObjectSize = 400;
param.maxTrackLifetime = 20;
param.minDetectionPeriod = 7;
param.minNeighbors = 3;
param.minObjectSize = 20;
param.scaleFactor = 1.1;

// The object needs to be defined using the constructor with the above 
// declared parameter structure. Then the object.run() method is called 
// to initialize the tracking.
DetectionBasedTracker obj = DetectionBasedTracker("haarcascade_frontalface_alt.xml", param);
obj.run();

And so, for each frame, I can process it to detect the bounding boxes of faces using the lines:

obj.process(gray_frame);
vector< Rect_<int> > faces;
obj.getObjects(faces);
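
For illustration, here is how those pieces fit together on a single colour frame. The grayscale conversion and the cv::rectangle loop below are my own additions for visual debugging, not part of the tracker API, and "frame" is a hypothetical cv::Mat holding the current BGR camera image:

// Sketch: run the tracker on the current frame and outline the results.
Mat gray_frame;
cvtColor(frame, gray_frame, CV_BGR2GRAY);   // the tracker expects a grayscale image

obj.process(gray_frame);
vector< Rect_<int> > faces;
obj.getObjects(faces);

// Draw each tracked face's bounding box onto the original colour frame
for (size_t i = 0; i < faces.size(); i++)
{
    rectangle(frame, faces[i], Scalar(0, 255, 0), 2);
}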

Now, the issue. In Objective-C, how do I create the "DetectionBasedTracker obj" object so that it can be used in the "- (void)processImage:(Mat&)image" method? I do not know what calls the processImage method, so I do not know if I can pass it that way. Is there a way to make the "DetectionBasedTracker obj" global? And if so, how would I do that, and is this the correct way of doing it?

Thanks for your help!

Upvotes: 0

Views: 2451

Answers (1)

Since your view controller implementation is in Objective-C++ (it says so in your first link), and if you use Apple LLVM 2.0 or later as your compiler, you can use C++ code anywhere in your Objective-C++ implementation file (extension .mm). Import your C++ headers and declare all your class-wide variables there, in a class extension, rather than in the interface file, like so:

#import "ViewController.h"
#import "DetectionBasedTracker.h"


//class extension in your implementation file where your c++ variables go
@interface ViewController()
{
    DetectionBasedTracker myTracker();
}
@end

@implementation ViewController 

#pragma mark - Protocol CvVideoCameraDelegate

- (void)processImage:(Mat&)image;
{
    //...
    myTracker.doSomething();
    //...
}


@end
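
To tie this back to your tracker specifically, a minimal sketch of the construction might look like the following. The NSBundle lookup is an assumption about how you would locate the cascade .xml inside the app bundle; the parameter values are copied from your question.

- (void)viewDidLoad
{
    [super viewDidLoad];

    // Locate the cascade file bundled with the app (assumes the .xml has
    // been added to the project as a resource).
    NSString *cascadePath =
        [[NSBundle mainBundle] pathForResource:@"haarcascade_frontalface_alt"
                                        ofType:@"xml"];

    DetectionBasedTracker::Parameters param;
    param.minObjectSize = 20;
    param.maxObjectSize = 400;
    param.scaleFactor = 1.1;
    param.maxTrackLifetime = 20;
    param.minNeighbors = 3;
    param.minDetectionPeriod = 7;

    // The tracker is a plain C++ object, so allocate it with new and keep
    // it alive for the lifetime of the view controller.
    myTracker = new DetectionBasedTracker([cascadePath UTF8String], param);
    myTracker->run();
}

- (void)dealloc
{
    // C++ objects are not managed by ARC; stop the tracker and free it.
    myTracker->stop();
    delete myTracker;
}

Inside processImage you can then convert the incoming frame to grayscale and call myTracker->process() and myTracker->getObjects() exactly as in your Android code.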

Upvotes: 0
