Shubham Singh

Reputation: 101

Optimizing Language Detection code and Lemmatization in Python

I have a dataset of Amazon user reviews in JSON format which I am importing into a pandas DataFrame and using to train a model for text classification. I am trying to preprocess the user review text before training the model with that data. I have two questions here:

1) I have written code to detect each review's language using the TextBlob library in Python. It works fine but consumes a lot of time. Please tell me if there is a more optimal approach. The code is:

    from textblob import TextBlob
    def detect_language(text):
        if len(text)>3:
            r=TextBlob(text)
            lang = r.detect_language()
            return lang
    dataset['language'] = dataset.reviewText.apply(detect_language)
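
For reference, TextBlob's detect_language() sends every string over the network to the Google Translate API, which is the main reason it is slow (newer TextBlob releases have removed the method entirely). Below is a minimal offline sketch using the third-party langdetect package; the helper name detect_language_offline is illustrative, not part of the original code:

    # Offline alternative: no network round-trip per review.
    # Requires `pip install langdetect`.
    from langdetect import detect, DetectorFactory
    from langdetect.lang_detect_exception import LangDetectException

    DetectorFactory.seed = 0  # make results deterministic across runs

    def detect_language_offline(text):
        # Same guard as the original: skip very short strings.
        if isinstance(text, str) and len(text) > 3:
            try:
                return detect(text)
            except LangDetectException:
                return None  # empty or undetectable text
        return None

    dataset['language'] = dataset.reviewText.apply(detect_language_offline)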

2) I want to lemmatize my words before training the model. But since lemmatization in NLTK only works properly if the words are tagged with their parts of speech, I am trying it as follows and getting an error:

    from nltk import pos_tag
    from nltk.stem import WordNetLemmatizer
    text='my name is shubham'
    text=pos_tag(text.split())
    wl=WordNetLemmatizer()
    for i in text:
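        # i is a (word, tag) tuple here, not a plain string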
        print(wl.lemmatize(i))

Here the POS tags I get are:

    [('my', 'PRP$'), ('name', 'NN'), ('is', 'VBZ'), ('shubham', 'JJ')]

and while doing the lemmatization I get this error:

    AttributeError: 'tuple' object has no attribute 'endswith'

Can you please suggest an efficient way to perform lemmatization? Here is the sample data on which I am performing language detection and lemmatization:

    overall  reviewText
          5  Not much to write about here, but it does exac...
          5  The product does exactly as it should and is q...
          5  The primary job of this device is to block the...
          5  Nice windscreen protects my MXL mic and preven...
          5  This pop filter is great. It looks and perform...

Upvotes: 0

Views: 840

Answers (1)

alvas

Reputation: 122102

TL;DR

from nltk import pos_tag, word_tokenize
from nltk.stem import WordNetLemmatizer
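# Assumes the NLTK data packages 'punkt', 'averaged_perceptron_tagger'
# and 'wordnet' have already been downloaded via nltk.download(...).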

wnl = WordNetLemmatizer()

def penn2morphy(penntag):
    """ Converts Penn Treebank tags to WordNet. """
    morphy_tag = {'NN':'n', 'JJ':'a',
                  'VB':'v', 'RB':'r'}
    try:
        return morphy_tag[penntag[:2]]
    except KeyError:
        return 'n'  # default to noun when the tag is unmapped

def lemmatize_sent(text): 
    # Text input is string, returns lowercased strings.
    return [wnl.lemmatize(word.lower(), pos=penn2morphy(tag)) 
            for word, tag in pos_tag(word_tokenize(text))]
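
For example, with the sentence from the question (assuming the tagger assigns the same tags as shown above):

lemmatize_sent('my name is shubham')
# ['my', 'name', 'be', 'shubham']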

To lemmatize a dataframe column of strings:

df['lemmas'] = df['text'].apply(lemmatize_sent)
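
With the column names from the question's dataframe, this would be, for example:

dataset['lemmas'] = dataset['reviewText'].apply(lemmatize_sent)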

In Long

Read https://www.kaggle.com/alvations/basic-nlp-with-nltk

Upvotes: 1
