ChamingaD

Reputation: 2928

How to stem words in a Python list?

I have a Python list like the one below:

documents = ["Human machine interface for lab abc computer applications",
             "A survey of user opinion of computer system response time",
             "The EPS user interface management system",
             "System and human system engineering testing of EPS",
             "Relation of user perceived response time to error measurement",
             "The generation of random binary unordered trees",
             "The intersection graph of paths in trees",
             "Graph minors IV Widths of trees and well quasi ordering",
             "Graph minors A survey"]

Now I need to stem each word in it and get another list. How do I do that?

Upvotes: 22

Views: 44446

Answers (7)

9113303

Reputation: 872

You can use either PorterStemmer or LancasterStemmer for stemming.
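
For example, with NLTK (a minimal sketch, assuming nltk is installed and documents is the list from the question):

from nltk.stem import PorterStemmer, LancasterStemmer

porter = PorterStemmer()
lancaster = LancasterStemmer()

# Stem every word of every sentence; Lancaster is more aggressive than Porter
porter_stems = [[porter.stem(w) for w in sentence.split()] for sentence in documents]
lancaster_stems = [[lancaster.stem(w) for w in sentence.split()] for sentence in documents]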

Upvotes: 0

Gigi

Reputation: 617

from nltk.stem import PorterStemmer

ps = PorterStemmer()
# Stem every word of every sentence in documents, keeping one sub-list per sentence
list_stem = [[ps.stem(word) for word in sentence.split()] for sentence in documents]

Upvotes: 2

Thomas Decaux

Reputation: 22661

You could use Whoosh (http://whoosh.readthedocs.io/):

from whoosh.analysis import CharsetFilter, StemmingAnalyzer
from whoosh.support.charset import accent_map

# A stemming analyzer chained with a filter that folds accented characters
my_analyzer = StemmingAnalyzer() | CharsetFilter(accent_map)

tokens = my_analyzer("hello you, comment ça va ?")
words = [token.text for token in tokens]

print(' '.join(words))

Upvotes: 1

Arash Hatami
Arash Hatami

Reputation: 5551

You can use NLTK:

from nltk.stem import PorterStemmer

ps = PorterStemmer()
# Stem each whitespace-separated token of every sentence
final = [[ps.stem(token) for token in sentence.split(" ")] for sentence in documents]

NLTK has many features for IR systems; check it out.
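
For instance, if you want smarter tokenization than split(" "), NLTK's word_tokenize also handles punctuation (a sketch that assumes the punkt tokenizer data has been downloaded):

import nltk
from nltk.stem import PorterStemmer

nltk.download('punkt')  # one-time download of the tokenizer model
ps = PorterStemmer()
final = [[ps.stem(token) for token in nltk.word_tokenize(sentence)] for sentence in documents]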

Upvotes: 3

Gareth Latty

Reputation: 88987

from stemming.porter2 import stem

documents = ["Human machine interface for lab abc computer applications",
             "A survey of user opinion of computer system response time",
             "The EPS user interface management system",
             "System and human system engineering testing of EPS",
             "Relation of user perceived response time to error measurement",
             "The generation of random binary unordered trees",
             "The intersection graph of paths in trees",
             "Graph minors IV Widths of trees and well quasi ordering",
             "Graph minors A survey"]

documents = [[stem(word) for word in sentence.split(" ")] for sentence in documents]

What we are doing here is using a list comprehension to loop through each string in the main list and split it into a list of words. We then loop through that list, stemming each word as we go, and build a new list of stemmed words.
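
Written with explicit loops instead of the list comprehension, it is roughly equivalent to:

stemmed_documents = []
for sentence in documents:                # each string in the main list
    stemmed_sentence = []
    for word in sentence.split(" "):      # split the string into words
        stemmed_sentence.append(stem(word))
    stemmed_documents.append(stemmed_sentence)
documents = stemmed_documents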

Please note I haven't tried this with the stemming package installed; I have taken that from the comments and have never used it myself. This is, however, the basic concept for splitting the list into words. Note that this will produce a list of lists of words, keeping the original sentence separation.

If you do not want this separation, you can do:

documents = [stem(word) for sentence in documents for word in sentence.split(" ")]

This will leave you with one continuous, flat list instead.

If you wish to join the words back together at the end, you can do:

documents = [" ".join(sentence) for sentence in documents]

or to do it in one line:

documents = [" ".join([stem(word) for word in sentence.split(" ")]) for sentence in documents]

which keeps the sentence structure, or

documents = " ".join(documents)

which ignores it (note that this second join expects a list of strings, so run it after joining each sentence back together as above).

Upvotes: 42

cha0site

Reputation: 10717

Alright. So, using the stemming package, you'd have something like this:

from stemming.porter2 import stem
from itertools import chain

def flatten(listOfLists):
    "Flatten one level of nesting"
    return list(chain.from_iterable(listOfLists))

def stemall(documents):
    return flatten([ [ stem(word) for word in line.split(" ")] for line in documents ])
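
A usage sketch, assuming documents is the list from the question and the stemming package is installed:

stemmed = stemall(documents)  # one flat list of stemmed words across all sentences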

Upvotes: 4

Thomas

Reputation: 181745

You might want to have a look at NLTK (the Natural Language Toolkit). It has a module, nltk.stem, which contains various stemmers.
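
For example, nltk.stem also ships a SnowballStemmer that supports several languages (a minimal sketch, assuming NLTK is installed and documents is the list from the question):

from nltk.stem import SnowballStemmer

stemmer = SnowballStemmer("english")
stemmed = [[stemmer.stem(word) for word in sentence.split()] for sentence in documents]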

See also this question.

Upvotes: 7
