Reputation: 4630
I'm playing with the different classifiers and vectorizers that scikit-learn provide so let's say I have the following:
training = [["this was a good movie, 'POS'"],
            ["this was a bad movie, 'NEG'"],
            ["i went to the movies, 'NEU'"],
            ["this movie was very exiting it was great, 'POS'"],
            ["this is a boring film, 'NEG'"],
            ...,
            [" N-sentence, 'LABEL'"]]
# where each element of the list is another list holding one document, then:
splitted = [...]  # remove the tags from training
from sklearn.feature_extraction.text import HashingVectorizer
X = HashingVectorizer(tokenizer=lambda doc: doc,
                      lowercase=False).fit_transform(splitted)
print X.toarray()
Then I have this vector representation:
[[ 0.  0.  0. ...,  0.  0.  0.]
 [ 0.  0.  0. ...,  0.  0.  0.]
 [ 0.  0.  0. ...,  0.  0.  0.]
 [ 0.  0.  0. ...,  0.  0.  0.]
 [ 0.  0.  0. ...,  0.  0.  0.]]
The problem is that I don't know whether I vectorized the corpus correctly. Then:
# This is the test corpus:
test = ["I don't like this movie it sucks it doesn't liked me"]
# I vectorize the test corpus with HashingVectorizer
Y = HashingVectorizer(tokenizer=lambda doc: doc,
                      lowercase=False).fit_transform(test)
Then I print Y:
[[ 0. 0. 0. ..., 0. 0. 0.]]
Then
y = [x[-1] for x in training]
#import SVM and classify
from sklearn.svm import SVC
svm = SVC()
svm.fit(X, y)
result = svm.predict(X)
print "\nThe opinion is:\n",result
And here's the problem: I got the following instead of ['NEG'], which would actually be the right prediction:
["this was a good movie, 'POS'"]
I guess I am not vectorizing training correctly, or the y target is wrong. Could anybody help me understand what is happening and how I should vectorize training in order to get the right prediction?
Upvotes: 2
Views: 219
Reputation: 40963
I will leave it to you to get the training data into the expected format:
training = ["this was a good movie",
"this was a bad movie",
"i went to the movies",
"this movie was very exiting it was great",
"this is a boring film"]
labels = ['POS', 'NEG', 'NEU', 'POS', 'NEG']
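For completeness, one possible way to do that split, as a sketch (it assumes every document is a single string ending in ", 'LABEL'" exactly as in the question; the names raw, docs and tags are illustrative):

raw = [["this was a good movie, 'POS'"],
       ["this was a bad movie, 'NEG'"]]  # the question's nested format, truncated

docs, tags = [], []
for [doc] in raw:
    text, _, tag = doc.rpartition(", ")  # split off the trailing ", 'POS'"
    docs.append(text)
    tags.append(tag.strip("'"))          # drop the surrounding quotes
# docs and tags now have the same shape as training and labels above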
Feature extraction
>>> from sklearn.feature_extraction.text import HashingVectorizer
>>> vect = HashingVectorizer(n_features=5, stop_words='english', non_negative=True)
>>> X_train = vect.fit_transform(training)
>>> X_train.toarray()
[[ 0.          0.70710678  0.          0.          0.70710678]
 [ 0.70710678  0.70710678  0.          0.          0.        ]
 [ 0.          0.          0.          0.          0.        ]
 [ 0.          0.89442719  0.          0.4472136   0.        ]
 [ 1.          0.          0.          0.          0.        ]]
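The fractional values come from the vectorizer's default L2 normalization: a row whose tokens fill two buckets with equal counts gets 1/sqrt(2) ≈ 0.70710678 in each, and a row with counts 2 and 1 gets 2/sqrt(5) ≈ 0.89442719 and 1/sqrt(5) ≈ 0.4472136, which is exactly what you see above.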
With a bigger corpus you should increase n_features to avoid collisions; I used 5 here only so that the resulting matrix can be visualized. Also note that I used stop_words='english': with so few examples it is important to get rid of stop words, otherwise you could confuse the classifier.
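(In more recent scikit-learn releases the non_negative parameter has been removed; alternate_sign=False is the closest equivalent.)
If you want to see the collision effect for yourself, here is a small sketch with a made-up word list: hash a handful of words into 5 buckets versus a large hash space and count how many distinct columns they occupy.

from sklearn.feature_extraction.text import HashingVectorizer

words = ["good", "bad", "boring", "great", "movie", "film"]
for n in (5, 2 ** 20):
    hv = HashingVectorizer(n_features=n)
    X = hv.transform(words)      # one single-word document per row
    cols = set(X.nonzero()[1])   # the bucket each word landed in
    print("n_features=%d: %d distinct buckets for %d words"
          % (n, len(cols), len(words)))

With only 5 buckets at least two of the six words must share a bucket, while with 2 ** 20 buckets a collision is very unlikely.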
Model training
from sklearn.svm import SVC
model = SVC()
model.fit(X_train, labels)
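As an aside, the vectorizer and the classifier can be chained in a scikit-learn Pipeline so that exactly the same transformation is applied at training and prediction time. A sketch reusing the toy data above (non_negative omitted for compatibility with newer releases):

from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.svm import SVC

pipe = Pipeline([
    ('vect', HashingVectorizer(n_features=5, stop_words='english')),
    ('clf', SVC()),
])
pipe.fit(training, labels)   # the lists defined above
print(pipe.predict(["I think it was a good movie"]))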
Prediction
>>> test = ["I don't like this movie it sucks it doesn't liked me"]
>>> X_pred = vect.transform(test)
>>> model.predict(X_pred)
['NEG']
>>> test = ["I think it was a good movie"]
>>> X_pred = vect.transform(test)
>>> model.predict(X_pred)
['POS']
EDIT: Note that the correct classification of the first test example is just a fortunate coincidence, as I don't see any word in it that could have been learned as negative from the training set. In the second example the word good could have triggered the positive classification.
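One way to check that yourself (a sketch using the vect object from above): run the vectorizer's analyzer over the test sentence and compare the surviving tokens with the training corpus.

analyze = vect.build_analyzer()
print(analyze("I don't like this movie it sucks it doesn't liked me"))
# tokens like 'movie' appear under several labels in the training set,
# and 'sucks' never occurs in it at all, so nothing in this sentence
# carries a distinctly negative learned weight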
Upvotes: 2