Reputation: 1753
I am trying to classify a sample using Naive Bayes. My sample size is 2.8 million records; 90% of the records have class label (dependent variable) = "0" and the rest have it as "1". The distribution in the testing set is the same (90% / 10%). The Naive Bayes classifier labels the entire testing set as "0". How do I deal with this case? Are there any other algorithms which can be implemented in such cases?
Upvotes: 1
Views: 1003
Reputation: 1690
Your problem may or may not be solved by using a better classifier. The real issue here is that your dataset is imbalanced. If the data is non-separable, then 90% accuracy might represent good performance, which the classifier achieves by always making the same prediction. If this is not the behaviour you want, you should either use a cost function that penalises misclassified positives more heavily, or resample your positives so that the two classes are more evenly represented (see the sketch below).
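A minimal sketch of the resampling option with scikit-learn's `resample`; the function name and the variables `X`, `y` are placeholders for your own data, not anything from the question:

```python
import numpy as np
from sklearn.utils import resample

def oversample_minority(X, y, minority_label=1, random_state=42):
    """Duplicate minority-class rows (with replacement) until both
    classes are the same size. X, y are assumed to be numpy arrays."""
    minority_mask = (y == minority_label)
    X_min, y_min = X[minority_mask], y[minority_mask]
    X_maj, y_maj = X[~minority_mask], y[~minority_mask]

    # Sample minority rows with replacement up to the majority count.
    X_min_up, y_min_up = resample(
        X_min, y_min,
        replace=True,
        n_samples=len(y_maj),
        random_state=random_state,
    )
    X_bal = np.vstack([X_maj, X_min_up])
    y_bal = np.concatenate([y_maj, y_min_up])
    return X_bal, y_bal
```

Train on the balanced set but evaluate on the untouched test set, otherwise your performance estimates will be distorted by the duplicated rows.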
Upvotes: 2
Reputation: 2302
Almost all classification methods don't actually return a binary result, but a propensity score (usually between 0 and 1) indicating how likely it is that the given case falls within the category. Binary results are then created by picking a cut-off point, usually at 0.5.
When you want to identify rare cases using weak predictors, any classification method may be unable to find cases with a propensity score higher than 0.5, resulting in all 0s, as in your case.
There are 3 things you can do in such a situation (a sketch of the first follows the list):

1. Lower the cut-off point, so that cases with a propensity score below 0.5 can still be labelled "1".
2. Rebalance the training data by over-sampling the rare class (or under-sampling the common one), so the classifier sees positives more often.
3. Use stronger predictors, or a classifier with cost-sensitive learning or class weights, so that true positives score higher.
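A hedged sketch of option 1: keep the propensity scores and pick your own cut-off instead of the default 0.5. `GaussianNB`, the 0.2 threshold, and the `X_train`/`y_train`/`X_test` names are illustrative assumptions, not anything prescribed by the question:

```python
from sklearn.naive_bayes import GaussianNB

# X_train, y_train, X_test: your own train/test split (assumed here).
model = GaussianNB()
model.fit(X_train, y_train)

# predict_proba returns one column per class; column 1 is P(label == 1).
scores = model.predict_proba(X_test)[:, 1]

# Label as "1" everything above a lower cut-off instead of 0.5.
predictions = (scores >= 0.2).astype(int)
```

Sweep the cut-off over a validation set and pick the value that optimises the metric you actually care about (e.g. recall on the rare class), rather than accuracy.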
Upvotes: 0
Reputation: 66815
There are dozens of classifiers, most of which can handle class disproportions using some custom technique; for example, in SVM it is "class weighting" (available in scikit-learn), as sketched below.
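A minimal sketch of class weighting with scikit-learn's `SVC`; `"balanced"` reweights classes inversely to their frequencies, and the explicit mapping shown in the comment is an illustrative choice for a 90/10 split. The `X_train`/`y_train`/`X_test` names are assumptions:

```python
from sklearn.svm import SVC

# class_weight="balanced" scales the penalty for each class by the
# inverse of its frequency; an explicit dict also works, e.g. {0: 1, 1: 9}.
clf = SVC(class_weight="balanced")
clf.fit(X_train, y_train)
y_pred = clf.predict(X_test)
```

Note that a kernel SVM will be slow on 2.8 million records; `sklearn.svm.LinearSVC`, which accepts the same `class_weight` parameter, is a more practical choice at that scale.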
So why does NB fail? Naive Bayes is very naive: it assumes independence of the features, which is rarely the case in practice. It is a simple idea to understand, but a weak classifier in general.
Upvotes: 0