Reputation: 672
I've struggled a lot but still can't figure out how to use extra features alongside text features with FeatureUnion in a scikit-learn pipeline.
I have a list of sentences and their labels to train a model, and a list of sentences as test data. I then try to add an extra feature (like the length of each sentence) to the bag of words. For this I wrote a custom LengthTransformer,
which returns a list of lengths with the same number of elements as my training list.
Then I combine that with the TfidfVectorizer
using FeatureUnion,
but it just doesn't work.
What I've come up with so far is this:
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.pipeline import Pipeline, FeatureUnion
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.multiclass import OneVsRestClassifier
from sklearn import preprocessing
class LengthTransformer(BaseEstimator, TransformerMixin):
    def fit(self, X, y=None):
        return self

    def transform(self, X):
        return [len(x) for x in X]
X_train = ["new york is a hell of a town",
           "new york was originally dutch",
           "the big apple is great",
           "new york is also called the big apple",
           "nyc is nice",
           "people abbreviate new york city as nyc",
           "the capital of great britain is london",
           "london is in the uk",
           "london is in england",
           "london is in great britain",
           "it rains a lot in london",
           "london hosts the british museum",
           "new york is great and so is london",
           "i like london better than new york"]
y_train_text = [["new york"], ["new york"], ["new york"], ["new york"], ["new york"],
                ["new york"], ["london"], ["london"], ["london"], ["london"],
                ["london"], ["london"], ["london", "new york"], ["new york", "london"]]
X_test = ['nice day in nyc',
          'welcome to london',
          'london is rainy',
          'it is raining in britian',
          'it is raining in britian and the big apple',
          'it is raining in britian and nyc',
          'hello welcome to new york. enjoy it here and london too']
lb = preprocessing.MultiLabelBinarizer()
Y = lb.fit_transform(y_train_text)
classifier = Pipeline([
    ('feats', FeatureUnion([
        ('tfidf', TfidfVectorizer()),
        ('len', LengthTransformer())
    ])),
    ('clf', OneVsRestClassifier(LinearSVC()))
])
classifier.fit(X_train, Y)
predicted = classifier.predict(X_test)
all_labels = lb.inverse_transform(predicted)
for item, labels in zip(X_test, all_labels):
print('{} => {}'.format(item, ', '.join(labels)))
Upvotes: 2
Views: 1315
Reputation: 22248
LengthTransformer.transform returns the wrong shape: it produces a scalar per input document, while a transformer should return a feature vector per document. You can make it work by changing [len(x) for x in X]
to [[len(x)] for x in X]
in LengthTransformer.transform.
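In other words, the corrected transformer emits a 2-D structure of shape (n_samples, 1), which FeatureUnion can then stack horizontally next to the TF-IDF matrix. A minimal sketch of just the fixed class:

```python
from sklearn.base import BaseEstimator, TransformerMixin

class LengthTransformer(BaseEstimator, TransformerMixin):
    """Emit one single-element feature vector (the length) per document."""
    def fit(self, X, y=None):
        # Stateless transformer: nothing to learn from the data.
        return self

    def transform(self, X):
        # One inner list per document, so the result is 2-D, not 1-D.
        return [[len(x)] for x in X]

docs = ["nice day in nyc", "welcome to london"]
print(LengthTransformer().fit_transform(docs))  # [[15], [17]]
```

With this shape, the rest of the pipeline in the question runs unchanged.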
Upvotes: 5