Reputation: 1
I am trying to match multiple objects, with rotation, using a simple template such as a smiley-face template, and I want to detect all of its occurrences in a test image.
I have tried using Features2D and Homography for the detection, but I ran into several problems.
P1: This keypoint-matching method does not seem accurate for a SIMPLE template (I tried the same method on a much more complicated template, and the matching result was better). Is there any way to handle this?
P2: This method is clearly not suitable for a test image containing multiple objects. How can I match multiple objects using a single template? (The premise is that I don't know the number or locations of the objects in the test image.)
Below is my function code.
`//-- Load the template (object) and test (scene) images
Mat img_object = imread( "2.png", CV_LOAD_IMAGE_GRAYSCALE );
Mat img_scene  = imread( "1.png", CV_LOAD_IMAGE_GRAYSCALE );

//-- Step 1: Detect the keypoints using the SURF detector
int minHessian = 400; // Hessian threshold; tune for your images
SurfFeatureDetector detector( minHessian );
vector<KeyPoint> keypoints_object, keypoints_scene;
detector.detect( img_object, keypoints_object );
detector.detect( img_scene, keypoints_scene );

//-- Step 2: Compute SURF descriptors at the keypoints
SurfDescriptorExtractor extractor;
Mat descriptors_object, descriptors_scene;
extractor.compute( img_object, keypoints_object, descriptors_object );
extractor.compute( img_scene, keypoints_scene, descriptors_scene );

//-- Step 3: Match descriptor vectors using the FLANN matcher
FlannBasedMatcher matcher;
std::vector<DMatch> matches;
matcher.match( descriptors_object, descriptors_scene, matches );

//-- Quick calculation of max and min distances between matched keypoints
double max_dist = 0; double min_dist = 100;
for( int i = 0; i < descriptors_object.rows; i++ )
{
    double dist = matches[i].distance;
    if( dist < min_dist ) min_dist = dist;
    if( dist > max_dist ) max_dist = dist;
}

//-- Keep only "good" matches (distance < 3 * min_dist)
std::vector<DMatch> good_matches;
for( int i = 0; i < descriptors_object.rows; i++ )
{
    if( matches[i].distance < 3 * min_dist )
        good_matches.push_back( matches[i] );
}

Mat img_matches;
drawMatches( img_object, keypoints_object, img_scene, keypoints_scene,
             good_matches, img_matches, Scalar::all(-1), Scalar::all(-1),
             vector<char>(), DrawMatchesFlags::NOT_DRAW_SINGLE_POINTS );

//-- Localize the object: collect the matched point pairs
std::vector<Point2f> obj;
std::vector<Point2f> scene;
for( size_t i = 0; i < good_matches.size(); i++ )
{
    obj.push_back( keypoints_object[ good_matches[i].queryIdx ].pt );
    scene.push_back( keypoints_scene[ good_matches[i].trainIdx ].pt );
}
Mat H = findHomography( obj, scene, CV_RANSAC );

//-- Get the corners from the template image ( the object to be "detected" )
std::vector<Point2f> obj_corners(4);
obj_corners[0] = Point2f( 0, 0 );
obj_corners[1] = Point2f( img_object.cols, 0 );
obj_corners[2] = Point2f( img_object.cols, img_object.rows );
obj_corners[3] = Point2f( 0, img_object.rows );
std::vector<Point2f> scene_corners(4);
perspectiveTransform( obj_corners, scene_corners, H );

//-- Draw lines between the mapped corners, shifted right by the template
//-- width because img_matches shows the template and the scene side by side
line( img_matches, scene_corners[0] + Point2f( img_object.cols, 0 ), scene_corners[1] + Point2f( img_object.cols, 0 ), Scalar( 0, 255, 0 ), 4 );
line( img_matches, scene_corners[1] + Point2f( img_object.cols, 0 ), scene_corners[2] + Point2f( img_object.cols, 0 ), Scalar( 0, 255, 0 ), 4 );
line( img_matches, scene_corners[2] + Point2f( img_object.cols, 0 ), scene_corners[3] + Point2f( img_object.cols, 0 ), Scalar( 0, 255, 0 ), 4 );
line( img_matches, scene_corners[3] + Point2f( img_object.cols, 0 ), scene_corners[0] + Point2f( img_object.cols, 0 ), Scalar( 0, 255, 0 ), 4 );
`
I am a beginner in computer vision, and this is my first time asking on this forum. Many thanks for your help!
Upvotes: 0
Views: 970
Reputation: 509
If your problem is to detect only that kind of image, a simple thing you can do is use a circle detector. You can then group the points of the biggest circle (the head) with the points of the eyes. Once you know the positions of the centroids of those three circles, you can recover the position and rotation of the face by looking at where the eyes are.
In the image, the red points represent the centroids of the circles. You can get the head position by finding where the main centroid is; alpha is the angle between the right eye and the main centroid. If you can find the new angle, you can compute theta, which indicates the rotation of the face, and this may even work under scale changes.
Upvotes: 1