Addee

Reputation: 671

Is the HOG descriptor rotation invariant?

I am working on grass weed detection and have started extracting features with the HOG descriptor. According to the HOG literature, HOG is not rotation invariant. I have 18 images of each class of grass weed, and there are two classes. In both my training and testing databases I have rotated each image by [5 10 15 20 ... 355] degrees.

Training and testing are done with the LibSVM package, and I am getting an accuracy of about 80%.

My question is: if HOG is not rotation invariant, then how can I get such high accuracy?

Upvotes: 2

Views: 2724

Answers (1)

MHICV

Reputation: 106

First things first: for a rotationally invariant descriptor D you would have:
D(image) ~= D(image_5) ~= D(image_X)
where X is the angle of rotation.

By the operator ~= we mean that the distance between the compared feature vectors is small.

As a consequence, for a rotationally invariant descriptor D, you would not have to add the rotated versions of your images to the training set. Because D(image) ~= D(image_30) ~= D(image_X), adding the rotated images to the training set is somewhat redundant (in feature space you are adding samples at very similar positions).
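To see concretely why ~= does not hold for HOG, here is a toy sketch. This is not the real HOG pipeline (it computes a single whole-image histogram of gradient orientations, with no cells, blocks, or normalisation), but it captures the relevant property: rotating the image moves the gradient energy into different orientation bins, so the descriptor of the rotated image lands far away in feature space.

```python
import math

def orientation_histogram(img, bins=8):
    """Toy HOG-like descriptor: one histogram of gradient orientations
    over the whole image (no cells, blocks, or normalisation)."""
    h = [0.0] * bins
    for y in range(1, len(img) - 1):
        for x in range(1, len(img[0]) - 1):
            gx = img[y][x + 1] - img[y][x - 1]   # central differences
            gy = img[y + 1][x] - img[y - 1][x]
            ang = math.atan2(gy, gx) % math.pi   # unsigned orientation in [0, pi)
            h[int(ang / math.pi * bins) % bins] += math.hypot(gx, gy)
    return h

def rotate90(img):
    """Rotate an image (list of rows) by 90 degrees."""
    return [list(row) for row in zip(*img[::-1])]

# A small image with a strong horizontal edge.
img = [[0, 0, 0, 0, 0],
       [0, 0, 0, 0, 0],
       [9, 9, 9, 9, 9],
       [9, 9, 9, 9, 9],
       [0, 0, 0, 0, 0]]

d1 = orientation_histogram(img)            # all mass in the 90-degree bin
d2 = orientation_histogram(rotate90(img))  # all mass in the 0-degree bin
dist = math.dist(d1, d2)
print(d1)
print(d2)
print(dist)  # large, nowhere near 0: D(image) ~= D(image_90) fails
```

The two histograms have all their energy in different bins, so their distance is as large as it could possibly be for the given gradient magnitudes.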

Instead, in your configuration the robustness to rotation is not handled by HOG but by:
1/ data augmentation (adding the rotated images to the training set), and
2/ the machine learning algorithm, the SVM.
In feature space, D(image) and D(image_X) are located at different positions, and the SVM learns to "put them" in the same class.

If you really want to test the invariance of HOG against rotation, don't add the rotated images to the training set; keep them only in the test set. Accuracy should fall drastically.
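The suggested experiment can be sketched end to end with toy data. This is a stand-in, not your actual pipeline: a whole-image orientation histogram replaces HOG, a nearest-centroid rule replaces the SVM, and the two invented "classes" are just a strong-edge and a weak-edge image, rotated by 90 degrees. Training on upright images only leaves the rotated test images misclassified at chance level; adding the rotated copies to the training set (your augmentation) recovers the accuracy.

```python
import math

def orientation_histogram(img, bins=8):
    """Toy stand-in for HOG: one whole-image histogram of gradient
    orientations (no cells, blocks, or normalisation)."""
    h = [0.0] * bins
    for y in range(1, len(img) - 1):
        for x in range(1, len(img[0]) - 1):
            gx = img[y][x + 1] - img[y][x - 1]
            gy = img[y + 1][x] - img[y - 1][x]
            ang = math.atan2(gy, gx) % math.pi
            h[int(ang / math.pi * bins) % bins] += math.hypot(gx, gy)
    return h

def rotate90(img):
    return [list(row) for row in zip(*img[::-1])]

def centroid(feats):
    return [sum(col) / len(feats) for col in zip(*feats)]

def accuracy(test_set, centroids):
    """Nearest-centroid classification (stand-in for the SVM)."""
    hits = sum(1 for f, label in test_set
               if min(centroids, key=lambda c: math.dist(f, centroids[c])) == label)
    return hits / len(test_set)

strong = [[0]*5, [0]*5, [9]*5, [9]*5, [0]*5]   # class "A": strong edge
weak   = [[0]*5, [0]*5, [4]*5, [4]*5, [0]*5]   # class "B": weak edge

# Test set: only the rotated images, as suggested above.
test_set = [(orientation_histogram(rotate90(strong)), "A"),
            (orientation_histogram(rotate90(weak)),  "B")]

# 1/ Train on upright images only: rotated test images are misclassified.
plain = {"A": centroid([orientation_histogram(strong)]),
         "B": centroid([orientation_histogram(weak)])}
print("no augmentation:  ", accuracy(test_set, plain))   # 0.5 (chance level)

# 2/ Add the rotated copies to the training set: the classifier recovers.
aug = {"A": centroid([orientation_histogram(strong),
                      orientation_histogram(rotate90(strong))]),
       "B": centroid([orientation_histogram(weak),
                      orientation_histogram(rotate90(weak))])}
print("with augmentation:", accuracy(test_set, aug))     # 1.0
```

The drop from 1.0 to chance level is exactly the "accuracy should fall drastically" effect: the descriptor itself never became invariant, only the training set did.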

Upvotes: 5
