Reputation:
The code below fits a scikit-learn multinomial naive Bayes classifier.
import numpy as np
from sklearn.naive_bayes import MultinomialNB
X = np.random.randint(5, size=(10, 100))
y = np.random.randint(2, size=(10,))
clf = MultinomialNB()
clf.fit(X, y)
Now I want to find the important features in my model, and the scikit-learn documentation lists two relevant attributes:
feature_log_prob_ : array, shape (n_classes, n_features)
Empirical log probability of features given a class, P(x_i|y).
coef_ : array, shape (n_classes, n_features)
Mirrors feature_log_prob_ for interpreting MultinomialNB as a linear model.
If I print the shapes of both attributes:
print(clf.feature_log_prob_.shape)  # gives (2, 100)
print(clf.coef_.shape)              # gives (1, 100)
But when there are more than two classes, both attributes give the same result.
What is the difference between these two attributes?
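For example, with three classes both attributes come back identical. This is a minimal sketch of what I mean; it assumes a scikit-learn version where coef_ is still exposed on naive Bayes models (it has been deprecated in newer releases):

import numpy as np
from sklearn.naive_bayes import MultinomialNB

rng = np.random.RandomState(0)
X = rng.randint(5, size=(10, 100))
y = np.array([0, 1, 2, 0, 1, 2, 0, 1, 2, 0])  # three classes

clf = MultinomialNB()
clf.fit(X, y)

print(clf.feature_log_prob_.shape)  # (3, 100)
print(clf.coef_.shape)              # (3, 100)
print(np.array_equal(clf.feature_log_prob_, clf.coef_))  # True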
Upvotes: 3
Views: 2548
Reputation: 11
In standard binary classification, coef_
contains only the row of feature_log_prob_ for the "success" (second) class, which is why its shape is (1, n_features). In the multiclass case, coef_
contains the rows for all classes, i.e. it is identical to feature_log_prob_ with shape (n_classes, n_features).
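As a quick check (again assuming a scikit-learn version that still exposes coef_ on naive Bayes models), in the binary case coef_ should match the positive-class row of feature_log_prob_:

import numpy as np
from sklearn.naive_bayes import MultinomialNB

rng = np.random.RandomState(0)
X = rng.randint(5, size=(10, 100))
y = np.array([0, 1] * 5)  # binary labels

clf = MultinomialNB().fit(X, y)

# coef_ keeps only the row for the second ("success") class
print(np.array_equal(clf.coef_, clf.feature_log_prob_[1:]))  # True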
Upvotes: 1