sub_o

Reputation: 2802

How to speed up svm.predict?

I'm writing a sliding window to extract features and feed them into CvSVM's predict function. However, I've found that svm.predict is relatively slow.

Basically the window slides through the image with a fixed stride length, over a number of image scales.

My OpenCV build was compiled to include both TBB (threading) and OpenCL (GPU) functions.

Has anyone managed to speed up OpenCV's SVM.predict function?

I've been stuck on this issue for quite some time, since it's frustrating to run this detection algorithm through my test data for statistics and threshold adjustment.
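For scale, here is a rough sketch of why the per-window predict calls add up; the image size, window size, stride, and scale factor below are made-up example values, not taken from my actual setup:

```python
# Back-of-the-envelope count of predict() calls in a sliding-window
# detector over an image pyramid. All numeric parameters are
# illustrative placeholders.

def count_windows(img_w, img_h, win_w, win_h, stride, scale, min_size):
    """Count sliding-window positions across all pyramid levels."""
    total = 0
    w, h = img_w, img_h
    while w >= max(win_w, min_size) and h >= max(win_h, min_size):
        cols = (w - win_w) // stride + 1
        rows = (h - win_h) // stride + 1
        total += cols * rows
        # shrink the image for the next pyramid level
        w = int(w / scale)
        h = int(h / scale)
    return total

n = count_windows(640, 480, 64, 128, stride=8, scale=1.2, min_size=64)
print(n)  # each of these windows triggers one svm.predict() call
```

Even at a modest resolution this is thousands of windows per frame, so any per-call cost inside predict is multiplied by that factor.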

Thanks a lot for reading through this!

Upvotes: 3

Views: 6858

Answers (3)

Cynichniy Bandera

Reputation: 6103

As Fred Foo has already mentioned, you have to reduce the number of support vectors. From my experience, 5-10% of the training set is enough to get a good level of prediction.

Other ways to make it faster:

  1. Reduce the size of the feature vector. 3780 dimensions is way too much. I'm not sure what a feature of that size describes in your case, but from my experience a description of an image such as an automobile logo can effectively be packed into 150-200 dimensions:
    • PCA can be used to reduce the size of the feature vector as well as its "noise". There are examples of how it can be used together with SVM;
    • if that doesn't help, try other principles of image description, for example LBP and/or LBP histograms.
  2. LDA (alone or combined with SVM) can also be used.
  3. Try a linear SVM first. It is much faster, and your feature size of 3780 dimensions is more than enough "space" to get good separation if your sets are linearly separable in principle. If that's not good enough, try an RBF kernel with a fairly standard setup like C = 1 and gamma = 0.1. Only after that try POLY, the slowest one.
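To illustrate point 3, here is a minimal sketch (toy numbers, plain Python rather than the CvSVM API) of why a linear SVM predicts so much faster: the kernel expansion can be collapsed once into a single weight vector, so prediction no longer loops over support vectors at all:

```python
# A linear SVM's decision function
#   f(x) = sum_i alpha_i * y_i * <sv_i, x> + b
# can be precomputed into w = sum_i alpha_i * y_i * sv_i, so prediction
# becomes one dot product. The support vectors and coefficients below
# are invented for illustration.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

support_vectors = [[1.0, 2.0], [3.0, -1.0], [-2.0, 0.5]]
coeffs = [0.4, -0.7, 0.3]          # alpha_i * y_i for each support vector
b = 0.1

def predict_expanded(x):
    # O(nSV * f): loops over every support vector
    return sum(c * dot(sv, x) for c, sv in zip(coeffs, support_vectors)) + b

# collapse the expansion once after training...
w = [sum(c * sv[j] for c, sv in zip(coeffs, support_vectors))
     for j in range(len(support_vectors[0]))]

def predict_linear(x):
    # O(f): a single dot product, independent of nSV
    return dot(w, x) + b

x = [0.5, -1.5]
print(predict_expanded(x), predict_linear(x))  # identical decision values
```

With a kernel like RBF or POLY this collapse is impossible, which is exactly why those kernels stay slow at prediction time.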

Upvotes: 0

Fred Foo

Reputation: 363707

(Answer posted to formalize my comments, above:)

The prediction algorithm for an SVM takes O(nSV * f) time, where nSV is the number of support vectors and f is the number of features. The number of support vectors can be reduced by increasing the penalty parameter C, which punishes margin violations more heavily and typically leaves fewer support vectors (possibly at a cost in predictive accuracy).
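A minimal sketch of this cost model (toy data and a made-up gamma, plain Python rather than the CvSVM API), counting the per-feature work an RBF-style kernel expansion does for a single prediction:

```python
# Tally the multiply-adds needed to evaluate an RBF kernel expansion on
# one sample: the count comes out to exactly nSV * f, matching the
# O(nSV * f) bound. Data and gamma are invented for illustration.
import math
import random

random.seed(0)

def rbf_predict_cost(n_sv, n_features, gamma=0.1):
    svs = [[random.random() for _ in range(n_features)] for _ in range(n_sv)]
    x = [random.random() for _ in range(n_features)]
    ops = 0
    score = 0.0
    for sv in svs:
        d2 = 0.0
        for a, b in zip(sv, x):
            d2 += (a - b) ** 2
            ops += 1                    # one multiply-add per feature
        score += math.exp(-gamma * d2)  # one kernel value per support vector
    return ops

print(rbf_predict_cost(100, 3780))  # work per window at 3780 features
print(rbf_predict_cost(50, 3780))   # halving nSV halves the work
```

Since this cost is paid once per sliding-window position, shrinking either nSV or f translates directly into detector throughput.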

Upvotes: 4

Bee

Reputation: 2502

I'm not sure what features you are extracting, but from the size of your feature vector (3780) I would say you are extracting HOG. There is a very robust, optimized, and fast implementation of HOG "prediction" in the cv::HOGDescriptor class. All you need to do is:

  1. extract your HOG features for training
  2. put them in the SVMlight format
  3. use SVMlight with a linear kernel to train a model
  4. calculate the 3780 + 1 dimensional vector necessary for prediction
  5. feed the vector to the setSVMDetector() method of a cv::HOGDescriptor object
  6. use the detect() or detectMultiScale() methods for detection

The following document has very good information about how to achieve what you are trying to do: http://opencv.willowgarage.com/wiki/trainHOG. I must warn you that there is a small problem in the original program, but it teaches you how to approach this problem properly.
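A rough sketch of step 4, assuming a linear SVMlight model (toy numbers in plain Python; the real computation in the trainHOG sample is done in C++): collapse the support vectors into the primal weight vector and append the bias term, giving the single vector setSVMDetector() expects:

```python
# Collapse a trained *linear* SVM into the (f + 1)-dimensional vector
# for cv::HOGDescriptor::setSVMDetector(): the primal weights w followed
# by the bias. The tiny f and the model values below are invented
# stand-ins; with real HOG features f would be 3780.

f = 6                                  # stand-in for 3780
support_vectors = [[0.1 * (i + j) for j in range(f)] for i in range(3)]
alpha_y = [0.5, -0.2, 0.8]             # alpha_i * y_i from the model file
rho = 0.3                              # decision threshold from the model

# w = sum_i alpha_i * y_i * sv_i
w = [sum(a * sv[j] for a, sv in zip(alpha_y, support_vectors))
     for j in range(f)]

# the trainHOG sample appends the negated threshold as the last element
detector = w + [-rho]

print(len(detector))  # f + 1 values, i.e. 3781 for 3780-dim HOG features
```

Once this vector is loaded, detectMultiScale() handles the sliding window and the image pyramid internally, which is where the speedup over per-window svm.predict calls comes from.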

Upvotes: 2
