Arkham

Reputation: 69

NLTK Classifier giving only negative as answer in Sentiment Analysis

I am doing sentiment analysis using NLTK, training on the built-in movie_reviews corpus, and every time I classify my example text the result is neg.

My code:

import nltk
import random
import pickle
from nltk.corpus import movie_reviews
from os.path import exists
from nltk.classify import apply_features
from nltk.tokenize import word_tokenize, sent_tokenize

documents = [(list(movie_reviews.words(fileid)), category)
             for category in movie_reviews.categories()
             for fileid in movie_reviews.fileids(category)]

all_words = []
for w in movie_reviews.words():
    all_words.append(w.lower())
all_words = nltk.FreqDist(all_words)
word_features = list(all_words.keys())
print(word_features)

def find_features(document):
    words = set(document)
    features = {}
    for w in word_features:
        features[w] = (w in words)
    return features

featuresets = [(find_features(rev), category) for (rev, category) in documents]
numtrain = int(len(documents) * 90 / 100)
training_set = apply_features(find_features, documents[:numtrain])
testing_set = apply_features(find_features, documents[numtrain:])

classifier = nltk.NaiveBayesClassifier.train(training_set)
classifier.show_most_informative_features(15)

Example_Text = " avoids annual conveys vocal thematic doubts fascination slip avoids outstanding thematic astounding seamless"

doc = word_tokenize(Example_Text.lower())
featurized_doc = {i:(i in doc) for i in word_features} 
tagged_label = classifier.classify(featurized_doc)
print(tagged_label)

Here I am using the NaiveBayes classifier: I train it on the movie_reviews corpus and then use the trained classifier to predict the sentiment of my Example_Text.

Now, as you can see, my Example_Text is made up of hand-picked words. When I run classifier.show_most_informative_features(15), it gives me the 15 words with the highest ratio of being either positive or negative. I chose the positive words from this list.

Most Informative Features
                  avoids = True              pos : neg    =     12.1 : 1.0
               insulting = True              neg : pos    =     10.8 : 1.0
               atrocious = True              neg : pos    =     10.6 : 1.0
             outstanding = True              pos : neg    =     10.2 : 1.0
                seamless = True              pos : neg    =     10.1 : 1.0
                thematic = True              pos : neg    =     10.1 : 1.0
              astounding = True              pos : neg    =     10.1 : 1.0
                    3000 = True              neg : pos    =      9.9 : 1.0
                  hudson = True              neg : pos    =      9.9 : 1.0
               ludicrous = True              neg : pos    =      9.8 : 1.0
                   dread = True              pos : neg    =      9.5 : 1.0
                   vocal = True              pos : neg    =      9.5 : 1.0
                 conveys = True              pos : neg    =      9.5 : 1.0
                  annual = True              pos : neg    =      9.5 : 1.0
                    slip = True              pos : neg    =      9.5 : 1.0

So why don't I get pos as the result? Why do I always get neg, even though the classifier has been trained properly?

Upvotes: 2

Views: 721

Answers (1)

akornilo

Reputation: 91

The problem is that you are including every word as a feature, and the features of the form 'word: False' create a lot of extra noise which drowns out the positive features. I looked at the two log probabilities and they are fairly similar: -812 vs. -808. In this kind of problem, it is generally better to use only 'word: True' style features, because all the other ones will only add noise.

I copied your code, but modified the last three lines as follows:

featurized_doc = {c:True for c in Example_Text.split()}
tagged_label = classifier.classify(featurized_doc)
print(tagged_label)

and got the output 'pos'.
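For consistency, the training features can be built the same presence-only way. Here is a minimal, self-contained sketch of that idea; it uses a small hypothetical labeled corpus in place of movie_reviews so it runs without the NLTK data download:

```python
import nltk

# Toy labeled documents standing in for movie_reviews (hypothetical data,
# only to keep the sketch self-contained).
train_docs = [
    ("outstanding seamless astounding film", "pos"),
    ("conveys thematic depth with vocal flair", "pos"),
    ("insulting atrocious ludicrous mess", "neg"),
    ("a dull insulting and atrocious plot", "neg"),
]

def presence_features(text):
    # Only 'word: True' features -- words that are absent are simply
    # omitted, so they contribute no noise to the classification.
    return {w: True for w in text.lower().split()}

train_set = [(presence_features(text), label) for text, label in train_docs]
classifier = nltk.NaiveBayesClassifier.train(train_set)

print(classifier.classify(presence_features("outstanding thematic seamless")))
```

Because every word in the test sentence was seen only in positive training documents, the classifier labels it pos; note that NaiveBayesClassifier handles the omitted features gracefully, so nothing else in the pipeline needs to change.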

Upvotes: 2
