Reputation: 37
I am using C++ and OpenCV 3.3.1
I am trying to train an SVM with OpenCV. My steps are: extract feature descriptors from each training image, reshape each descriptor Mat into a single row, and train the SVM on those rows.
And now my problem: say my images are 128 x 128, and after feature extraction I get a Mat with 16 rows and 128 columns; after the reshape I get 1 row and 2048 columns, and the SVM is trained with samples of that size. When I try to predict, the SVM expects a feature Mat of the same size (1 row, 2048 columns), but the image I want to classify yields more features than the training images did, so its feature Mat is much bigger than the SVM accepts.
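Sketched, my training side looks roughly like this (SIFT from the xfeatures2d contrib module stands in for my extractor here, since my descriptors are 128-dimensional):

```cpp
#include <opencv2/core.hpp>
#include <opencv2/xfeatures2d.hpp>
#include <opencv2/ml.hpp>
#include <vector>

using namespace cv;

void trainSketch(const std::vector<Mat>& images, const Mat& labels) {
    Ptr<Feature2D> sift = xfeatures2d::SIFT::create();
    Mat trainData;
    for (const Mat& img : images) {
        std::vector<KeyPoint> kps;
        Mat desc;
        sift->detectAndCompute(img, noArray(), kps, desc); // e.g. 16 x 128
        trainData.push_back(desc.reshape(1, 1));           // flatten to 1 x 2048
        // push_back throws as soon as an image yields a different number
        // of keypoints, because the column counts no longer match
    }
    Ptr<ml::SVM> svm = ml::SVM::create();
    svm->train(trainData, ml::ROW_SAMPLE, labels);         // locked to 2048 columns
}
```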
Prediction with the same image I used for training works well, so I guess the SVM itself works.
How can I use the SVM for bigger images?
Upvotes: 0
Views: 876
Reputation: 584
Using SURF/SIFT descriptors by flattening them into a 1 x 2048 feature vector is not a very good idea, for two reasons:
You are restricting the number of useful features per image (to 16), and if the number of features differs from 16 you get an error. Even if you force exactly 16 features every time, you may end up losing features, and the results will degrade.
You are training an SVM classifier on 2048-dimensional vectors without exploiting any relationship between the extracted feature descriptors.
A more robust and standard way of doing this is Bag of Words: you build a K-dimensional descriptor from the SIFT features using a vocabulary and histogram approach, and then train the SVM classifier on these K-dimensional descriptors, which have the same size for every image.
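A minimal sketch of that pipeline using OpenCV's built-in BOWKMeansTrainer and BOWImgDescriptorExtractor (K = 100, the file paths, and the FLANN matcher are assumptions here, not values from the question; SIFT again comes from the xfeatures2d contrib module):

```cpp
#include <opencv2/core.hpp>
#include <opencv2/imgcodecs.hpp>
#include <opencv2/features2d.hpp>
#include <opencv2/xfeatures2d.hpp>
#include <opencv2/ml.hpp>
#include <vector>

using namespace cv;

int main() {
    Ptr<Feature2D> sift = xfeatures2d::SIFT::create();

    // 1) Pool SIFT descriptors from all training images and cluster them
    //    into K visual words (K = 100 is an arbitrary choice).
    BOWKMeansTrainer bowTrainer(100);
    std::vector<String> files;
    glob("train/*.png", files);                  // hypothetical path
    for (const String& f : files) {
        Mat img = imread(f, IMREAD_GRAYSCALE);
        std::vector<KeyPoint> kps;
        Mat desc;
        sift->detectAndCompute(img, noArray(), kps, desc);
        if (!desc.empty()) bowTrainer.add(desc);
    }
    Mat vocabulary = bowTrainer.cluster();       // K x 128

    // 2) Re-encode every image as a 1 x K histogram of visual words.
    //    The histogram size depends only on K, not on the image size
    //    or on how many keypoints were found.
    Ptr<DescriptorMatcher> matcher = DescriptorMatcher::create("FlannBased");
    BOWImgDescriptorExtractor bowExtractor(sift, matcher);
    bowExtractor.setVocabulary(vocabulary);

    Mat trainData;
    Mat labels;                                  // CV_32S, one class id per image
    for (const String& f : files) {
        Mat img = imread(f, IMREAD_GRAYSCALE);
        std::vector<KeyPoint> kps;
        sift->detect(img, kps);
        Mat bowDesc;                             // 1 x K, CV_32F
        bowExtractor.compute(img, kps, bowDesc);
        trainData.push_back(bowDesc);
        labels.push_back(0);                     // placeholder: use the real class
                                                 // label; SVM needs >= 2 classes
    }

    // 3) Train the SVM on fixed-size K-dimensional samples.
    Ptr<ml::SVM> svm = ml::SVM::create();
    svm->train(trainData, ml::ROW_SAMPLE, labels);

    // 4) Prediction works for an image of any size,
    //    because it too is reduced to a 1 x K histogram first.
    Mat query = imread("query.png", IMREAD_GRAYSCALE);  // hypothetical path
    std::vector<KeyPoint> qkps;
    sift->detect(query, qkps);
    Mat qDesc;
    bowExtractor.compute(query, qkps, qDesc);
    float cls = svm->predict(qDesc);
    (void)cls;
    return 0;
}
```

The key point is that bowExtractor.compute always returns a 1 x K row, so the SVM sees the same dimensionality at training and prediction time no matter how many keypoints each image produces.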
This link might be helpful for you: https://www.codeproject.com/Articles/619039/Bag-of-Features-Descriptor-on-SIFT-Features-with-O
If you want to use MATLAB, VLFeat has an implementation of the whole pipeline.
Upvotes: 1