Reputation: 7380
I'm trying to use scikit-learn for a classification task. My code extracts features from the data and stores them in a dictionary like so:
feature_dict['feature_name_1'] = feature_1
feature_dict['feature_name_2'] = feature_2
When I split the data for testing using sklearn.cross_validation, everything works as it should. The problem I'm having is when the test data is a new set, not part of the learning set (although it has exactly the same features for each sample). After I fit the classifier on the learning set, when I try to call clf.predict
I get this error:
ValueError: X has different number of features than during model fitting.
I am assuming this has to do with this (out of the DictVectorizer docs):
Named features not encountered during fit or fit_transform will be silently ignored.
DictVectorizer has removed some of the features, I guess... How do I disable or work around this behavior?
Thanks
=== EDIT ===
The problem was, as larsMans suggested, that I was fitting the DictVectorizer twice.
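For anyone hitting the same thing, the buggy pattern looks roughly like this (the tiny train_dicts/test_dicts here are just stand-ins for the real feature dictionaries):
from sklearn.feature_extraction import DictVectorizer

train_dicts = [{'a': 1.0}]           # stand-ins for the real feature dicts
test_dicts = [{'a': 2.0, 'b': 1.0}]  # 'b' was never seen during training

vec = DictVectorizer()
X_train = vec.fit_transform(train_dicts)  # learns the vocabulary {'a'}

# Bug: a second fit_transform relearns the vocabulary from the test dicts,
# so X_test ends up with 2 columns while the classifier was trained on 1
X_test = vec.fit_transform(test_dicts)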
Upvotes: 0
Views: 1713
Reputation: 28758
You should use fit_transform on the training set, and only transform on the test set.
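For example, a minimal sketch (the toy train_dicts/test_dicts data and the LogisticRegression classifier are just placeholders; any estimator works the same way):
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression

train_dicts = [{'feature_name_1': 1.0, 'feature_name_2': 0.0},
               {'feature_name_1': 0.0, 'feature_name_2': 2.0}]
y_train = [0, 1]
test_dicts = [{'feature_name_1': 3.0, 'feature_name_2': 1.0}]

vec = DictVectorizer()
X_train = vec.fit_transform(train_dicts)   # learns the feature vocabulary
clf = LogisticRegression().fit(X_train, y_train)

X_test = vec.transform(test_dicts)         # reuses the learned vocabulary
predicted = clf.predict(X_test)
This way the test matrix always has the same columns, in the same order, as the matrix the classifier was fitted on.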
Upvotes: 5
Reputation: 7325
Are you making sure to apply the previously fitted scaler and selector transforms to the test data?
from sklearn import preprocessing
from sklearn.feature_selection import SelectPercentile, f_classif

# Fit the scaler and selector on the training data only
scaler = preprocessing.StandardScaler().fit(trainingData)
selector = SelectPercentile(f_classif, percentile=90)
selector.fit(scaler.transform(trainingData), labelsTrain)
...
...
# Reuse the same fitted transforms on the test data before predicting
predicted = clf.predict(selector.transform(scaler.transform(testingData)))
Upvotes: 0