Ben

Reputation: 317

2D Point Set Matching

What is the best way to match the scan (photographed) point set to the template point set (the blue, green, red, and pink circles in the images)? I am using OpenCV/C++. Maybe some kind of ICP algorithm? I would like to wrap the scan image to the template image!

template point set: template image

scan point set: scan image

Upvotes: 2

Views: 3865

Answers (4)

Vlad

Reputation: 4525

Follow these steps:

  1. Match points or features in the two images; this determines your warp.
  2. Decide what transformation you are looking for. The most general is a homography (see cv::findHomography()), and the least general is a simple translation (use cv::matchTemplate()). The intermediate case is translation along x and y plus rotation. For this I wrote a fast function that works better than a homography, since it uses fewer degrees of freedom while still optimizing the right metric (squared differences in coordinates): https://stackoverflow.com/a/18091472/457687
  3. If you think your matches have a lot of outliers, run RANSAC on top of step 1. You basically need to randomly select a minimal set of points required for finding the parameters, solve, determine the inliers, solve again using all inliers, then iterate to improve your current solution (increase the number of inliers, reduce the error, or both). See Wikipedia for the RANSAC algorithm: http://en.wikipedia.org/wiki/Ransac
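The translation-plus-rotation case from step 2 has a closed-form least-squares solution once correspondences are known (the 2D Procrustes/Kabsch construction). This is a plain-C++ sketch of that idea, not the function from the linked answer; the struct and function names are made up for illustration:

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

struct Pt { double x, y; };

// Least-squares rotation + translation mapping src[i] -> dst[i]
// (2D Procrustes / Kabsch). Assumes correspondences are already known.
void rigidFit(const std::vector<Pt>& src, const std::vector<Pt>& dst,
              double& theta, double& tx, double& ty)
{
    const double n = static_cast<double>(src.size());
    Pt cs{0, 0}, cd{0, 0};
    for (std::size_t i = 0; i < src.size(); ++i) {
        cs.x += src[i].x; cs.y += src[i].y;
        cd.x += dst[i].x; cd.y += dst[i].y;
    }
    cs.x /= n; cs.y /= n; cd.x /= n; cd.y /= n;

    // Accumulate dot- and cross-products of the centered points.
    double sdot = 0, scross = 0;
    for (std::size_t i = 0; i < src.size(); ++i) {
        const double ax = src[i].x - cs.x, ay = src[i].y - cs.y;
        const double bx = dst[i].x - cd.x, by = dst[i].y - cd.y;
        sdot   += ax * bx + ay * by;
        scross += ax * by - ay * bx;
    }
    theta = std::atan2(scross, sdot);  // optimal rotation angle

    // Translation maps the rotated source centroid onto the destination centroid.
    const double c = std::cos(theta), s = std::sin(theta);
    tx = cd.x - (c * cs.x - s * cs.y);
    ty = cd.y - (s * cs.x + c * cs.y);
}
```

With the angle and translation recovered, building the 2x3 matrix for cv::warpAffine() is straightforward.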

Upvotes: 0

mevatron

Reputation: 14021

Have you looked at OpenCV's descriptor_extractor_matcher.cpp sample? This sample uses RANSAC to detect the homography between the two input images. I assume when you say wrap you actually mean warp? If you would like to warp the image with the homography matrix you detect, have a look at the warpPerspective function. Finally, here are some good tutorials using the different feature detectors in OpenCV.

EDIT : You may not have SURF features, but you certainly have feature points with different classes. Feature-based matching is generally split into two phases: feature detection (which you have already done) and feature extraction, which you need for matching. So, you might try converting your features into cv::KeyPoints and then running the extraction and matching stages. Here is a little code snippet of how you might go about this:

const int RED_TYPE    = 1;
const int GREEN_TYPE  = 2;
const int BLUE_TYPE   = 3;
const int PURPLE_TYPE = 4;

struct BenFeature
{
    Point2f pt;
    int classId;
};

vector<BenFeature> benFeatures;

// Detect the features as you normally would in addition setting the class ID

vector<KeyPoint> keypoints;
for (size_t i = 0; i < benFeatures.size(); i++)
{
    BenFeature bf = benFeatures[i];
    KeyPoint kp(bf.pt,
                10.0f,      // feature neighborhood diameter (you'll probably need to tune it)
                -1.0f,      // angle; -1 == not applicable
                500.0f,     // feature response strength (set all to the same value unless you have a metric describing strength)
                1,          // octave level (ditto as above)
                bf.classId  // RED, GREEN, BLUE, or PURPLE
                );
    keypoints.push_back(kp);
}

// now proceed with extraction and matching...

You may need to tune the response strength such that it doesn't get thresholded out by the extraction phase. But, hopefully that's illustrative of what you might try to do.
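Since each point already carries a class, you could even skip descriptor extraction entirely and restrict nearest-neighbour matching to features of the same class (red only matches red, and so on). A minimal plain-C++ sketch of that idea; the `Feature` struct and function name here are made up for illustration:

```cpp
#include <cstddef>
#include <limits>
#include <vector>

struct Feature { double x, y; int classId; };

// Brute-force matcher constrained by class: for each scan feature, pick the
// nearest template feature of the same class.
// Returns matches[i] = index into templ, or -1 if no same-class candidate exists.
std::vector<int> matchByClass(const std::vector<Feature>& scan,
                              const std::vector<Feature>& templ)
{
    std::vector<int> matches(scan.size(), -1);
    for (std::size_t i = 0; i < scan.size(); ++i) {
        double best = std::numeric_limits<double>::max();
        for (std::size_t j = 0; j < templ.size(); ++j) {
            if (templ[j].classId != scan[i].classId) continue;  // class gate
            const double dx = scan[i].x - templ[j].x;
            const double dy = scan[i].y - templ[j].y;
            const double d2 = dx * dx + dy * dy;
            if (d2 < best) { best = d2; matches[i] = static_cast<int>(j); }
        }
    }
    return matches;
}
```

The resulting correspondences can then be fed straight into cv::findHomography() with the RANSAC flag to reject any remaining mismatches.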

Upvotes: 0

Niki

Reputation: 15867

Do you have to match the red rectangles? The original image contains four black rectangles in the corners that seem to be made for matching. I can reliably find them with 4 lines of Mathematica code:

lotto = [source image]
lottoBW = Image[Map[Max, ImageData[lotto], {2}]]

This takes max(R,G,B) for each pixel, i.e. it filters out the red and yellow print (more or less). The result looks like this:

bw filter result

Then I just use a LoG (Laplacian of Gaussian) filter to find the dark spots and look for local maxima in the result image:

lottoBWG = ImageAdjust[LaplacianGaussianFilter[lottoBW, 20]]
MaxDetect[lottoBWG, 0.5]

Result: [detection result image]
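The max(R,G,B) step from the first line of Mathematica translates directly to C++; the sketch below assumes interleaved 8-bit RGB pixels (in OpenCV you would instead split channels and use cv::max):

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Per-pixel max(R,G,B): red/yellow print becomes near-white, while the black
// corner marks stay dark. Input is interleaved RGB, 3 bytes per pixel;
// output is one grey byte per pixel.
std::vector<unsigned char> maxChannel(const std::vector<unsigned char>& rgb)
{
    std::vector<unsigned char> grey(rgb.size() / 3);
    for (std::size_t i = 0; i < grey.size(); ++i)
        grey[i] = std::max({rgb[3 * i], rgb[3 * i + 1], rgb[3 * i + 2]});
    return grey;
}
```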

Upvotes: 1

WebMonster

Reputation: 3031

If the object is reasonably rigid and aligned, simple auto-correlation would do the trick. If not, I would use RANSAC to estimate the transformation between the subject and the template (it seems that you have the feature points). Please provide some details on the problem.

Edit: RANSAC (Random Sample Consensus) could be used in your case. Think of the unnecessary points in your template as noise (false features reported by a feature detector): they are the outliers. RANSAC handles outliers because it chooses a small subset of feature points (the minimal amount needed to initialize your model) at random, initializes the model, and calculates how well the model matches the given data (how many other points in the template correspond to your other points). If you choose a wrong subset, this value will be low and you will drop the model. If you choose a right subset, it will be high and you can then refine your match with a least-squares (LMS) algorithm.
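As a toy illustration of that select-score-keep loop, here is a plain-C++ RANSAC sketch for the simplest possible model, a pure translation, where a single correspondence is the minimal sample (names and tolerances are invented for the example; a homography would need 4-point samples instead):

```cpp
#include <cmath>
#include <cstddef>
#include <random>
#include <vector>

struct P { double x, y; };

struct Model { double tx, ty; int inliers; };

// Minimal RANSAC for a translation-only model: one randomly chosen pair
// hypothesizes the shift, all other pairs vote. Assumes src[i] is paired
// with dst[i]; outlier pairs simply fail the inlier test.
Model ransacTranslation(const std::vector<P>& src, const std::vector<P>& dst,
                        double tol, int iters, unsigned seed = 42)
{
    std::mt19937 gen(seed);
    Model best{0, 0, -1};
    for (int it = 0; it < iters; ++it) {
        // 1. Random minimal sample: one pair fixes the translation.
        const std::size_t k = gen() % src.size();
        const double tx = dst[k].x - src[k].x;
        const double ty = dst[k].y - src[k].y;
        // 2. Score: count the pairs consistent with this shift.
        int inl = 0;
        for (std::size_t i = 0; i < src.size(); ++i) {
            const double ex = src[i].x + tx - dst[i].x;
            const double ey = src[i].y + ty - dst[i].y;
            if (std::hypot(ex, ey) < tol) ++inl;
        }
        // 3. Keep the model with the most support.
        if (inl > best.inliers) best = {tx, ty, inl};
    }
    return best;
}
```

The final refit over all inliers (the least-squares step mentioned above) is omitted here; for a translation it is just the mean of the inlier offsets.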

Upvotes: 1
