Reputation: 41
I am using the classical SIFT-BOW-SVM pipeline for image classification. My classifiers are trained using the one-vs-all (1vsAll) paradigm.
Let's say I currently have 100 classes. Later, I would like to add new classes, OR I would like to improve recognition for some specific classes using additional training sets.
What would be the best approach? Of course, the safest way would be to re-execute every step of the training stage.
But would it make sense to train only the additional (or modified) classes using the same vocabulary as the previous model, in order to avoid recomputing a new vocabulary and retraining ALL the classes?
Upvotes: 1
Views: 1016
Reputation: 66795
In short: no. If you add a new class, it has to be added to each of the "old" classifiers so that "one vs. all" still makes sense. If you expect new classes to appear over time, consider using one-class classifiers instead, such as the one-class SVM. That way, once you get new samples for a particular class, you only retrain that one model, or add a completely new one to the system.
Furthermore, for a large number of classes, 1-vs-all SVM works quite badly, and the one-class approach is usually much better.
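To make the idea concrete, here is a minimal sketch of the one-model-per-class setup using scikit-learn's `OneClassSVM`. The BOW histograms are random stand-ins, and the vocabulary size, `nu`, and class names are placeholder assumptions, not values from your pipeline:

```python
import numpy as np
from sklearn.svm import OneClassSVM

# Stand-in BOW histograms: one (n_samples, vocab_size) array per class.
rng = np.random.default_rng(0)
class_samples = {
    "cat": rng.random((50, 100)),
    "dog": rng.random((50, 100)),
}

# One one-class SVM per class, each trained only on that class's samples.
models = {name: OneClassSVM(kernel="rbf", nu=0.1).fit(X)
          for name, X in class_samples.items()}

# Adding a new class later touches no existing model:
class_samples["bird"] = rng.random((50, 100))
models["bird"] = OneClassSVM(kernel="rbf", nu=0.1).fit(class_samples["bird"])

def predict(x):
    # Classify a query histogram by the highest decision score across models.
    scores = {name: m.decision_function(x.reshape(1, -1))[0]
              for name, m in models.items()}
    return max(scores, key=scores.get)
```

Note that this still assumes all classes share the same vocabulary; if the vocabulary itself drifts badly as classes are added, it may need to be rebuilt occasionally regardless of the classifier choice.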
Upvotes: 4