minks

Reputation: 3039

Lemmatization of a list of words

So I have a list of words in a text file. I want to perform lemmatization on them to collapse words that have the same meaning but are in different tenses, like try and tried. When I do this, I keep getting an error like TypeError: unhashable type: 'list'

    results=[]
    with open('/Users/xyz/Documents/something5.txt', 'r') as f:
       for line in f:
          results.append(line.strip().split())

    lemma= WordNetLemmatizer()

    lem=[]

    for r in results:
       lem.append(lemma.lemmatize(r))

    with open("lem.txt","w") as t:
      for item in lem:
        print>>t, item

How do I lemmatize words which are already tokens?

Upvotes: 4

Views: 15601

Answers (2)

Ashok Kumar Jayaraman

Reputation: 3095

Open the text file and read its lines into a list, as shown below:
fo = open(filename)
results1 = fo.readlines()

results1
['I have a list of words in a text file', ' \n I want to perform lemmatization on them to remove words which have the same meaning but are in different tenses', '']

# Tokenize lists

results2 = [line.split() for line in results1]

# Remove empty lists

results2 = [ x for x in results2 if x != []]

# Lemmatize each word from a list using WordNetLemmatizer

from nltk.stem.wordnet import WordNetLemmatizer
lemmatizer = WordNetLemmatizer()
lemma_list_of_words = []
for l1 in results2:
    l2 = ' '.join(lemmatizer.lemmatize(word) for word in l1)
    lemma_list_of_words.append(l2)
lemma_list_of_words
['I have a list of word in a text file', 'I want to perform lemmatization on them to remove word which have the same meaning but are in different tense']

Compare lemma_list_of_words with results1 to see the effect of lemmatization (words → word, tenses → tense).

Upvotes: 1

Mike Robins

Reputation: 1773

The method WordNetLemmatizer.lemmatize is probably expecting a string, but you are passing it a list of strings; that is what raises the TypeError.

The result of line.strip().split() is a list of strings, which you are appending as a whole to results, i.e. building a list of lists.
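The difference is easy to see with a hypothetical two-line input standing in for something5.txt:

```python
lines = ["I have tried\n", "try and tried again\n"]  # stand-in for the file's contents

appended, extended = [], []
for line in lines:
    appended.append(line.strip().split())  # appends each token list whole
    extended.extend(line.strip().split())  # splices the tokens in one by one

print(appended)  # [['I', 'have', 'tried'], ['try', 'and', 'tried', 'again']]
print(extended)  # ['I', 'have', 'tried', 'try', 'and', 'tried', 'again']
```

lemmatize chokes on the inner lists of the first shape but handles each string of the second.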

You want to use results.extend(line.strip().split())

results = []
with open('/Users/xyz/Documents/something5.txt', 'r') as f:
    for line in f:
        results.extend(line.strip().split())

lemma = WordNetLemmatizer()

lem = map(lemma.lemmatize, results)

with open("lem.txt", "w") as t:
    for item in lem:
        print(item, file=t)

Or, refactored without the intermediate results list:

def words(fname):
    with open(fname, 'r') as document:
        for line in document:
            for word in line.strip().split():
                yield word

lemma = WordNetLemmatizer()
lem = map(lemma.lemmatize, words('/Users/xyz/Documents/something5.txt'))
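Note that in Python 3, map returns a lazy iterator, so lem yields words only as you consume it. A runnable sketch of the same pipeline, with str.lower standing in for lemma.lemmatize and a temporary file standing in for the hard-coded path, so it runs without NLTK:

```python
import os
import tempfile

def words(fname):
    with open(fname, 'r') as document:
        for line in document:
            for word in line.strip().split():
                yield word

# Create a throwaway input file (stand-in for something5.txt).
with tempfile.NamedTemporaryFile('w', suffix='.txt', delete=False) as f:
    f.write("Tried TRY\ntries\n")
    path = f.name

lem = map(str.lower, words(path))  # lazy: the file is not read yet
result = list(lem)                 # draining the iterator reads the file
os.remove(path)

print(result)  # ['tried', 'try', 'tries']
```

The generator keeps memory flat: only one line of the file is held at a time, however large the input.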

Upvotes: 5
