justHelloWorld

Reputation: 6828

Will increasing the number of detected features in SIFT increase precision?

I'm implementing a content-based image retrieval application which involves the Bag of Features model. I'm using cv::SIFT as the feature detector.

Anyway, the application's performance is not great, and I'm trying to improve it starting from the first step of the algorithm, which is feature detection.

Reading the cv::SIFT::create() documentation, I noticed 3 parameters that caught my attention:

  • nfeatures – The number of best features to retain. The features are ranked by their scores (measured in SIFT algorithm as the local contrast)
  • contrastThreshold – The contrast threshold used to filter out weak features in semi-uniform (low-contrast) regions. The larger the threshold, the less features are produced by the detector.
  • edgeThreshold – The threshold used to filter out edge-like features. Note that its meaning is different from the contrastThreshold, i.e. the larger the edgeThreshold, the fewer features are filtered out (more features are retained).

Does this mean that increasing the first and third parameters, while decreasing the second, should improve the algorithm's precision (at the cost of worse time performance)?

I'm wondering about this especially for the first parameter: if, for example, we set nfeatures=2000, is it going to retain exactly 2000 features, no matter whether they are "interesting" or not? In other words, is it going to keep "uninteresting" (i.e. bad) keypoints?
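For reference, here is a minimal Python sketch of how these three parameters are passed (assuming OpenCV >= 4.4, where SIFT sits in the main cv2 module; in 3.x builds it lives in cv2.xfeatures2d instead, and the image path below is just a placeholder). It also prints the response scores, which is the ranking the nfeatures cut-off is based on:

    import cv2

    # Load a query image in grayscale (placeholder path).
    img = cv2.imread("query.jpg", cv2.IMREAD_GRAYSCALE)

    # nfeatures=2000 means "retain at most the 2000 best-ranked keypoints",
    # not "always return exactly 2000".
    sift = cv2.SIFT_create(nfeatures=2000,
                           contrastThreshold=0.03,  # lower  -> more keypoints kept
                           edgeThreshold=15)        # higher -> more keypoints kept

    keypoints, descriptors = sift.detectAndCompute(img, None)

    # KeyPoint.response is the local-contrast score used for ranking;
    # the weakest retained keypoints show how "uninteresting" the tail
    # of a large nfeatures budget can get.
    responses = sorted((kp.response for kp in keypoints), reverse=True)
    print(len(keypoints), "keypoints")
    print("strongest:", responses[0], "weakest:", responses[-1])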

Upvotes: 2

Views: 3760

Answers (1)

Saurav

Reputation: 597

I have used the SIFT algorithm in Python and at some point researched how to improve its accuracy. Here are some of the points I could collate, as far as I remember:

  1. The number of "interesting" features will always depend on the object you are trying to detect. If the object has very random edges, more keypoints will be detected; if the image is simpler (for example, it has only 1-2 distinct colours and a very distinctive border), far fewer keypoints will be detected. In that case, if you increase the nfeatures attribute, there is a high chance that false points will be detected and you will get bad results.
  2. Assuming you have a very good object image and you do get the 2000 keypoints you are looking for, changing the other attributes will significantly affect which features survive, since those attributes are mainly used for keypoint localization. You need to play with the parameters to fine-tune them, but again the right values may vary from object to object (see the sketch after this list).
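As a rough illustration of both points, here is a hedged sketch (placeholder image path, arbitrarily chosen thresholds) that compares a default detector against a loosened one, so you can see how many of the extra keypoints are low-contrast:

    import cv2

    def sift_stats(img, **params):
        # Detect keypoints with the given SIFT parameters and summarise them.
        sift = cv2.SIFT_create(**params)
        kps = sift.detect(img, None)
        responses = sorted((kp.response for kp in kps), reverse=True)
        return len(kps), responses

    img = cv2.imread("object.jpg", cv2.IMREAD_GRAYSCALE)  # placeholder path

    for name, params in [("default", {}),
                         ("loosened", {"nfeatures": 2000,
                                       "contrastThreshold": 0.02,
                                       "edgeThreshold": 20})]:
        n, responses = sift_stats(img, **params)
        print(f"{name}: {n} keypoints, "
              f"responses {responses[-1]:.4f} .. {responses[0]:.4f}")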

As per the official documentation, http://docs.opencv.org/3.1.0/da/df5/tutorial_py_sift_intro.html#gsc.tab=0, you can see that a lot of keypoints are detected in the example image, so to find the more "interesting" keypoints you have to experiment with the parameters.
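To actually look at the keypoints you are getting (as in that tutorial), cv2.drawKeypoints with the rich-keypoints flag draws each keypoint with its size and orientation, which makes the weak, cluttered ones easy to spot; the image path below is a placeholder:

    import cv2

    img = cv2.imread("object.jpg", cv2.IMREAD_GRAYSCALE)  # placeholder path
    sift = cv2.SIFT_create(nfeatures=2000)
    keypoints = sift.detect(img, None)

    # Draw circles scaled to keypoint size, with orientation lines.
    vis = cv2.drawKeypoints(img, keypoints, None,
                            flags=cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)
    cv2.imwrite("keypoints.jpg", vis)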

Another link that I found very useful if you are looking for the mathematical details is: http://www.inf.fu-berlin.de/lehre/SS09/CV/uebungen/uebung09/SIFT.pdf

This one can help you see the results as you change the parameters, and it's in MATLAB: http://www.vlfeat.org/overview/sift.html. Hope you find this useful for your endeavour.
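If you would rather stay in Python than install the VLFeat MATLAB toolbox, a simple parameter sweep like the one below (threshold values picked arbitrarily, placeholder image path) gives a similar feel for how the detector reacts:

    import cv2

    img = cv2.imread("object.jpg", cv2.IMREAD_GRAYSCALE)  # placeholder path

    for contrast in (0.02, 0.04, 0.08):
        for edge in (5, 10, 20):
            sift = cv2.SIFT_create(contrastThreshold=contrast, edgeThreshold=edge)
            kps = sift.detect(img, None)
            print(f"contrastThreshold={contrast}, edgeThreshold={edge}: "
                  f"{len(kps)} keypoints")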

Upvotes: 4
