Ibrahim Sufi

Reputation: 21

How do you find which words a trained naive bayes classifier uses to make decisions?

I have created a Naive Bayes classifier that uses the text of tweets from different politicians to predict their party, using sklearn's MultinomialNB. Here is my implementation:

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import MultinomialNB

# Turn the tweet text into bag-of-words counts
Senators_Vectorizer = CountVectorizer(decode_error='replace')
senator_counts = Senators_Vectorizer.fit_transform(senator_tweets['text'].values)
senator_targets = senator_tweets['party'].values

# Hold out 10% of the tweets for testing
senator_counts_train, senator_counts_test, senator_targets_train, senator_targets_test = train_test_split(
    senator_counts, senator_targets, test_size=0.1)

senator_party_clf = MultinomialNB()
senator_party_clf.fit(senator_counts_train, senator_targets_train)

How do I find the words that the Naive Bayes classifier is using to make predictions? Is there a way to find which words have the highest probability of being in Democrats'/Republicans' tweets?

I want the probabilities for each word in the Senators_Vectorizer vocabulary, not the probability of a specific tweet being from a specific party.

Upvotes: 1

Views: 1060

Answers (1)

Venkatachalam

Reputation: 16966

Use the feature_log_prob_ / coef_ attributes to get the log probabilities for each feature.

From Documentation:

feature_log_prob_: ndarray of shape (n_classes, n_features).
Empirical log probability of features given a class, P(x_i|y).

coef_: ndarray of shape (n_classes, n_features)
Mirrors feature_log_prob_ for interpreting MultinomialNB as a linear model.
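
In other words, each row of feature_log_prob_ lines up with the corresponding entry of clf.classes_, and each column with the vectorizer's vocabulary, so you can read off the (log) probability of any word under any class. A minimal sketch on a toy corpus (the documents, labels, and the word "healthcare" below are made up purely for illustration):

import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Toy corpus, purely for illustration
docs = ["tax cuts and border security",
        "healthcare for all and climate action",
        "lower taxes and strong borders",
        "expand healthcare and fight climate change"]
labels = ["R", "D", "R", "D"]

vec = CountVectorizer()
X = vec.fit_transform(docs)
clf = MultinomialNB().fit(X, labels)

# Rows of feature_log_prob_ follow clf.classes_, columns follow the vocabulary
print(clf.classes_)                 # ['D' 'R']
print(clf.feature_log_prob_.shape)  # (2, n_features)

# Log probability of the word "healthcare" under each class
idx = vec.vocabulary_["healthcare"]
for cls, row in zip(clf.classes_, clf.feature_log_prob_):
    print(cls, row[idx])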

This tutorial may also help.

Quick example to get the top features for each class:

import numpy as np
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB

categories = ['alt.atheism', 'talk.religion.misc',
              'comp.graphics', 'sci.space']

newsgroups_train = fetch_20newsgroups(subset='train',
                                      remove=('headers', 'footers', 'quotes'),
                                      categories=categories)
vectorizer = TfidfVectorizer(stop_words='english')
vectors = vectorizer.fit_transform(newsgroups_train.data)
clf = MultinomialNB(alpha=.01).fit(vectors, newsgroups_train.target)


def show_top10(classifier, vectorizer, categories):
    # feature_log_prob_ has shape (n_classes, n_features); on older sklearn
    # versions the same values are exposed as coef_
    feature_names = np.asarray(vectorizer.get_feature_names_out())
    for i, category in enumerate(categories):
        # indices of the 10 features with the highest log probability for this class
        top10 = np.argsort(classifier.feature_log_prob_[i])[-10:]
        print("%s: %s" % (category, " ".join(feature_names[top10])))

show_top10(clf, vectorizer, newsgroups_train.target_names)

output:

alt.atheism: islam does religion atheism say just think don people god
comp.graphics: windows does looking program know file image files thanks graphics
sci.space: earth think shuttle orbit moon just launch like nasa space
talk.religion.misc: objective think just bible don christians christian people jesus god
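
Applied to the senator data in the question, a sketch along the same lines (assuming the Senators_Vectorizer and senator_party_clf objects from the question have already been fitted) could look like this:

import numpy as np

# Assumes Senators_Vectorizer and senator_party_clf from the question are fitted
feature_names = np.asarray(Senators_Vectorizer.get_feature_names_out())

for i, party in enumerate(senator_party_clf.classes_):
    # 10 words with the highest P(word | party)
    top10 = np.argsort(senator_party_clf.feature_log_prob_[i])[-10:]
    print("%s: %s" % (party, " ".join(feature_names[top10])))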

Upvotes: 2
