Schweigerama

Reputation: 109

Python scikit-learn prediction fails

I'm new to Python and machine learning. I'm trying to implement a simple machine-learning script to predict the topic of a text, e.g. texts about Barack Obama should be mapped to "politicians".

I think I'm making the right moves, but I'm not 100% sure, so I'm asking you guys.

First of all, here is my little script:

#imports
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_extraction.text import TfidfTransformer
from sklearn.naive_bayes import MultinomialNB
#dictionary for mapping the targets
categories_dict = {'0' : 'politiker','1' : 'nonprofit org'}

import glob
#get filenames from docs
filepaths = glob.glob('Data/*.txt')
print(filepaths)

docs = []

for path in filepaths:
    with open(path, 'r') as doc:
        docs.append(doc.read())
#print(docs)


count_vect = CountVectorizer()
#train Data
X_train_count = count_vect.fit_transform(docs)
#print X_train_count.shape

#tfidf transformation (occurrences to frequencies)
tfidf_transform = TfidfTransformer()
X_train_tfidf = tfidf_transform.fit_transform(X_train_count)

#the categories you want to predict; these must be in the same order as the train docs!
categories = ['0','0','0','1','1']
clf = MultinomialNB().fit(X_train_tfidf,categories)

#try to predict
to_predict = ['Barack Obama is the President of the United States','Greenpeace']

#transform (not fit_transform) the new data you want to predict
X_pred_counts = count_vect.transform(to_predict)
X_pred_tfidf = tfidf_transform.transform(X_pred_counts)
print(X_pred_tfidf)

#predict
predicted = clf.predict(X_pred_tfidf)

for doc,category in zip(to_predict,predicted):
    print('%r => %s' %(doc,categories_dict[category]))

I'm sure about the general workflow required here, but I'm not sure how I map the categories to the docs I use to train the classifier. I know they must be in the correct order, and I think I have that right, but it doesn't output the right category.

Is that because the documents I use to train the classifier are bad, or am I making some mistake I'm not aware of?
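In case it helps, here is how I understand the doc-to-label pairing could be made explicit so the order can't silently go wrong (just a sketch with made-up placeholder texts, not my real training files):

```python
# Keep each training document next to its label in one structure,
# instead of relying on glob() returning files in a particular order.
# The texts below are tiny placeholders, not my real data.
training_data = [
    ('Barack Obama was a US president', '0'),
    ('Angela Merkel is a German politician', '0'),
    ('Vladimir Putin leads Russia', '0'),
    ('Greenpeace campaigns for the environment', '1'),
    ('The Red Cross is a humanitarian organization', '1'),
]

# Split the pairs back into the two parallel lists the vectorizer
# and classifier expect; by construction they stay aligned.
docs = [text for text, label in training_data]
categories = [label for text, label in training_data]
```

That way the fifth document and the fifth label always belong together, no matter how the files are listed on disk.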

It predicts that both new texts are about target 0 (politicians).

Thanks in advance.

Upvotes: 1

Views: 156

Answers (1)

elyase

Reputation: 40963

It looks like the model hyperparameters are not well tuned. It is difficult to draw conclusions from so little data, but if you use:

model = MultinomialNB(alpha=0.5).fit(X, y)
# or
from sklearn.linear_model import LogisticRegression
model = LogisticRegression().fit(X, y)

you will get the expected results, at least for words like "Greenpeace", "Obama", and "President", which are so obviously correlated with their corresponding class. I took a quick look at the coefficients of the model and it seems to be doing the right thing.

For a more sophisticated approach to topic modeling, I recommend you take a look at gensim.

Upvotes: 1
