Shahgee

Reputation: 3405

Matching with SIFT (Conceptual)

I have two images of a real-world scene. (IMPORTANT) I approximately know the transformation from one image to the other. Due to texture problems I don't get enough matches between the two images. How can I take the transformation information into account to get more (and more correct) matches using SIFT? Any idea will be helpful.

Upvotes: 1

Views: 1194

Answers (4)

NKN

Reputation: 6424

The first step, I think, is to experiment with the settings of the SIFT algorithm to find the configuration that works best for your problem.

Another way to use SIFT more effectively is to add COLOR information to it. You can append the color (RGB) values of the points used in the descriptor. For instance, if your descriptor matrix is 10x128, you are using 10 points. You can extract the RGB values at those points and add three columns, making the size 10x(128+3) [R-G-B for each point]. This makes SIFT matching more discriminative. But remember, you need to weight the descriptor so that the last three columns count more strongly than the other 128. I don't know what your images look like, but this method helped me a lot, and you can see that this modification makes SIFT a stronger method than before. A similar implementation can be found here.
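The color-augmentation idea above can be sketched as follows (a minimal sketch in Python with NumPy; the `weight` value and nearest-pixel color sampling are assumptions you would tune for your own data):

```python
import numpy as np

def augment_with_color(descriptors, image_rgb, keypoints_xy, weight=3.0):
    """Append weighted RGB columns to SIFT descriptors.

    descriptors  : (N, 128) float array of SIFT descriptors
    image_rgb    : (H, W, 3) uint8 image the keypoints came from
    keypoints_xy : (N, 2) array of (x, y) keypoint coordinates
    weight       : how strongly the 3 color columns count relative
                   to the 128 SIFT columns (an assumed tuning knob)
    """
    # Sample the color at each keypoint's nearest pixel.
    xs = np.clip(keypoints_xy[:, 0].astype(int), 0, image_rgb.shape[1] - 1)
    ys = np.clip(keypoints_xy[:, 1].astype(int), 0, image_rgb.shape[0] - 1)
    colors = image_rgb[ys, xs].astype(np.float32)      # (N, 3) R-G-B samples
    # Weighted concatenation: (N, 128 + 3) augmented descriptors.
    return np.hstack([descriptors, weight * colors])
```

Matching then proceeds on the 131-dimensional vectors exactly as it would on plain SIFT descriptors.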

Upvotes: 0

farzin parsa

Reputation: 547

There is another alternative:

Among SIFT's parameters, the contrast threshold defaults to 0.04. If you reduce it to a lower value (e.g. 0.02 or 0.01), SIFT will find more matches:

SIFT(int nfeatures=0, int nOctaveLayers=3, double contrastThreshold=0.04, double edgeThreshold=10, double sigma=1.6)

Upvotes: 0

peakxu

Reputation: 6675

If you know the transform, apply it first and then run SURF/SIFT on the transformed image. That's a standard way to extend the robustness of feature descriptors/matchers across large perspective changes.

Upvotes: 1

Throwback1986

Reputation: 6005

Have you tried other alternatives? Are you sure SIFT is the answer? OpenCV provides SIFT, among other tools. (At the moment, I can't speak highly enough of OpenCV.)

If I were solving this problem, I would first try:

  1. Downsample your two images to reduce the influence of "texture", e.g. with cvPyrDown.
  2. Perform some feature detection: edge detection, etc. OpenCV provides a Harris corner detector, among others. Google "cvGoodFeaturesToTrack" for details.
  3. If you have good confidence in your transformations, take advantage of your a priori information and look for features in neighborhoods corresponding to the transformed locations.

If you still want to look at SIFT or SURF, OpenCV provides those capabilities, as well.

Upvotes: 1
