Sm1

Reputation: 570

What are the features in feature detection algorithms and other doubts

I am going through feature detection algorithms and a lot of things seem unclear. The original paper is quite complicated for beginners in image processing to understand. I would be glad if these could be answered:

  1. What are the features which are being detected by SURF and SIFT?
  2. Is it necessary that these have to be computed on gray scale images?
  3. What does the term "descriptor" mean in simple words.
  4. Generally, how many features are selected/extracted? Is there a criterion for that?
  5. What does the size of Hessian matrix determine?
  6. What is the size of the features being detected? It is said that the size of a feature is the size of the blob. So, if the size of the image is M*N, will there be M*N features?

These questions may seem too trivial, but please help.

Upvotes: 2

Views: 1433

Answers (1)

Abid Rahman K

Reputation: 52646

I will try to give an intuitive answer to some of your questions; I don't know the answers to all of them.

(You didn't specify which paper you are reading)

What are the features and how many features are being detected by SURF and SIFT?

Normally, a feature is any part of an image around which you select a small block and move that block by a small distance in all directions. If you find considerable variation between the block you selected and its surroundings, it is considered a feature. Suppose you moved your camera a little bit to take the image; you will still detect this feature. That is why they are important. The best example of such a feature is a corner in the image. Even edges are not such good features: when you move your block along an edge line, you don't find any variation, right?

Check this image to understand what I said: only at the corner do you get considerable variation while moving the patches; in the other two cases you won't get much.


Image link: http://www.mathworks.in/help/images/analyzing-images.html

A very good explanation is given here: http://aishack.in/tutorials/features-what-are-they/

This is the basic idea, and the algorithms you mentioned make it more robust to several variations and solve many issues. (You can refer to their papers for more details.)
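The "move a small block and check the variation" idea above can be sketched in a few lines. This is a toy illustration only, not what SIFT or SURF actually compute; the tiny image and the min-of-SSDs score are my own simplification, in the spirit of Moravec's classic corner detector:

```python
import numpy as np

# Toy 9x9 image: bright wherever x >= 5 or y >= 5, so the top-left
# 5x5 block is dark.  Column 5 (for y < 5) is a vertical edge, and
# (4, 4) sits at the corner of the dark block.
img = np.where((np.arange(9)[:, None] >= 5) | (np.arange(9)[None, :] >= 5),
               1.0, 0.0)

def min_shift_ssd(img, y, x, r=1):
    """Moravec-style score: slide a (2r+1)x(2r+1) patch one pixel in
    each direction and return the *minimum* sum of squared differences.
    Flat regions and edges score 0; only corners score high."""
    patch = img[y - r:y + r + 1, x - r:x + r + 1]
    ssds = []
    for dy, dx in [(-1, 0), (1, 0), (0, -1), (0, 1)]:
        shifted = img[y + dy - r:y + dy + r + 1, x + dx - r:x + dx + r + 1]
        ssds.append(float(np.sum((patch - shifted) ** 2)))
    return min(ssds)

print(min_shift_ssd(img, 2, 2))  # flat region -> 0.0
print(min_shift_ssd(img, 2, 5))  # on the edge -> 0.0 (no change along it)
print(min_shift_ssd(img, 4, 4))  # at the corner -> 2.0
```

Note the score is the *minimum* over directions: an edge still gets 0 because there is always one direction (along the edge) with no change, while the corner changes in every direction. That is exactly why corners make good features.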

Is it necessary that these have to be computed on gray scale images?

I think so. In any case, OpenCV computes these features on grayscale images, so a color image gets converted to grayscale first.
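For illustration, converting a color pixel to gray is just a weighted sum of its channels; the weights below are the standard luma weights (the same ones OpenCV's cvtColor uses for RGB to GRAY conversion):

```python
import numpy as np

# One orange-ish pixel as (R, G, B); the standard luma weights are
# 0.299, 0.587, 0.114 -- green contributes the most because the eye
# is most sensitive to it.
rgb = np.array([200, 120, 40], dtype=float)
gray = 0.299 * rgb[0] + 0.587 * rgb[1] + 0.114 * rgb[2]
print(round(gray))  # -> 135, a single intensity value
```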

What does the term "descriptor" mean in simple words?

Suppose you found features in one image, say an image of a building. Now you take another image of the same building, but from a slightly different direction, and you find features in the second image also. But how can you match these features? Which feature in image 2 does feature 1 in image 1 correspond to? (As a human, you can do it easily, right? This corner of the building in the first image corresponds to this corner in the second image, and so on. Very easy.)

A feature by itself just gives you a pixel location. You need more information about that point to match it with others, so you have to describe the feature. This description is called a "descriptor". There are algorithms to compute these descriptions; you can see one in the SIFT paper.

Check this link also: http://aishack.in/tutorials/sift-scale-invariant-feature-transform-introduction/
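A toy sketch of what matching descriptors looks like (the 3-number descriptors here are made up for illustration; real SIFT descriptors have 128 numbers per keypoint):

```python
import numpy as np

# Toy "descriptors": each keypoint is described by a small vector.
# Matching = for each descriptor in image 1, find the closest
# descriptor in image 2 (here, by Euclidean distance).
desc1 = np.array([[0.0, 1.0, 0.0],   # keypoint 0 in image 1
                  [1.0, 0.0, 1.0]])  # keypoint 1 in image 1
desc2 = np.array([[0.9, 0.1, 1.1],   # looks like keypoint 1
                  [0.1, 0.9, 0.0]])  # looks like keypoint 0

matches = []
for i, d in enumerate(desc1):
    dists = np.linalg.norm(desc2 - d, axis=1)  # distance to every candidate
    matches.append(int(np.argmin(dists)))      # pick the nearest one
print(matches)  # -> [1, 0]: keypoint 0 matched to 1, keypoint 1 to 0
```

This nearest-neighbour idea is the core of it; real matchers add extras such as a ratio test to reject ambiguous matches.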

Generally, how many features are selected/extracted? Is there a criterion for that?

During processing you can see different thresholds being applied, weak keypoints being removed, etc. It is all part of the plan; you need to understand the algorithm to understand these things. Yes, you can specify these thresholds and other parameters (in OpenCV), or you can leave them at their defaults. If you check SIFT in the OpenCV docs, you can see function parameters to specify the number of features, the number of octave layers, the edge threshold, etc.

What does the size of Hessian matrix determine?

I don't know that exactly; it is just a threshold for the keypoint detector (the larger it is, the fewer but stronger the keypoints that are kept). Check the OpenCV docs: http://docs.opencv.org/modules/nonfree/doc/feature_detection.html#double%20hessianThreshold

Upvotes: 7
