WeaselFox

Reputation: 7380

how to force scikit-learn DictVectorizer not to discard features?

I'm trying to use scikit-learn for a classification task. My code extracts features from the data and stores them in a dictionary like so:

feature_dict['feature_name_1'] = feature_1
feature_dict['feature_name_2'] = feature_2

When I split the data for testing using sklearn.cross_validation, everything works as it should. The problem I'm having is when the test data is a new set, not part of the learning set (although it has exactly the same features for each sample). After I fit the classifier on the learning set and then call clf.predict, I get this error:

ValueError: X has different number of features than during model fitting.

I am assuming this has to do with the following note from the DictVectorizer docs:

Named features not encountered during fit or fit_transform will be silently ignored.

DictVectorizer has removed some of the features, I guess... How do I disable or work around this behaviour?

Thanks

=== EDIT ===

The problem was, as larsMans suggested, that I was fitting the DictVectorizer twice.
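Roughly, the mistake looked like this (variable names here are illustrative, not my actual code):

from sklearn.feature_extraction import DictVectorizer

vec = DictVectorizer()
X_train = vec.fit_transform(train_feature_dicts)
# Wrong: a second fit learns a new vocabulary from the test dicts only,
# so the resulting matrix has a different number of columns.
X_test = vec.fit_transform(test_feature_dicts)

# Right: reuse the vectorizer that was fitted on the training data.
X_test = vec.transform(test_feature_dicts)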

Upvotes: 0

Views: 1713

Answers (2)

Andreas Mueller

Reputation: 28758

You should use fit_transform on the training set, and only transform on the test set.
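A minimal sketch of that pattern (the classifier and the train/test variables are just placeholders):

from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression  # any classifier works here

vec = DictVectorizer()
clf = LogisticRegression()

# fit_transform learns the feature vocabulary from the training dicts
clf.fit(vec.fit_transform(train_feature_dicts), y_train)

# transform reuses that vocabulary, so the test matrix has the same columns
predicted = clf.predict(vec.transform(test_feature_dicts))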

Upvotes: 5

Paul

Reputation: 7325

Are you making sure to apply the previously fitted scaler and selector transforms to the test data?

from sklearn import preprocessing
from sklearn.feature_selection import SelectPercentile, f_classif

# fit the scaler and the feature selector on the training data only
scaler = preprocessing.StandardScaler().fit(trainingData)
selector = SelectPercentile(f_classif, percentile=90)
selector.fit(scaler.transform(trainingData), labelsTrain)
...
...
# apply the same (already fitted) transforms to the test data before predicting
predicted = clf.predict(selector.transform(scaler.transform(testingData)))

Upvotes: 0
