Reputation: 2928
I have python list like below
documents = ["Human machine interface for lab abc computer applications",
"A survey of user opinion of computer system response time",
"The EPS user interface management system",
"System and human system engineering testing of EPS",
"Relation of user perceived response time to error measurement",
"The generation of random binary unordered trees",
"The intersection graph of paths in trees",
"Graph minors IV Widths of trees and well quasi ordering",
"Graph minors A survey"]
Now I need to stem each word and get another list. How do I do that?
Upvotes: 22
Views: 44446
Reputation: 617
from nltk.stem import PorterStemmer

ps = PorterStemmer()
# `words` must be a flat list of words, not sentences; split first, e.g.
# words = [w for sentence in documents for w in sentence.split(" ")]
list_stem = [ps.stem(word) for word in words]
Upvotes: 2
Reputation: 22661
You could use whoosh (http://whoosh.readthedocs.io/):
from whoosh.analysis import CharsetFilter, StemmingAnalyzer
from whoosh import fields
from whoosh.support.charset import accent_map
my_analyzer = StemmingAnalyzer() | CharsetFilter(accent_map)
tokens = my_analyzer("hello you, comment ça va ?")
words = [token.text for token in tokens]
print(' '.join(words))
Upvotes: 1
Reputation: 5551
You can use NLTK:
from nltk.stem import PorterStemmer
ps = PorterStemmer()
final = [[ps.stem(token) for token in sentence.split(" ")] for sentence in documents]
NLTK has many features for IR systems; check it out.
Upvotes: 3
Reputation: 88987
from stemming.porter2 import stem
documents = ["Human machine interface for lab abc computer applications",
"A survey of user opinion of computer system response time",
"The EPS user interface management system",
"System and human system engineering testing of EPS",
"Relation of user perceived response time to error measurement",
"The generation of random binary unordered trees",
"The intersection graph of paths in trees",
"Graph minors IV Widths of trees and well quasi ordering",
"Graph minors A survey"]
documents = [[stem(word) for word in sentence.split(" ")] for sentence in documents]
What we are doing here is using a list comprehension to loop through each string inside the main list, splitting that into a list of words. Then we loop through that list, stemming each word as we go, returning the new list of stemmed words.
Please note I haven't tried this with stemming installed - I have taken that from the comments, and have never used it myself. This is, however, the basic concept for splitting the list into words. Note that this will produce a list of lists of words, keeping the original separation.
If you do not want this separation, you can do:
documents = [stem(word) for sentence in documents for word in sentence.split(" ")]
Instead, which will leave you with one continuous list.
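The nested and flat variants can be sketched end to end without any stemming package installed; here `fake_stem` (plain lowercasing) stands in for `stem` purely so the snippet is self-contained:

```python
def fake_stem(word):
    # stand-in for stemming.porter2.stem, so the sketch runs dependency-free
    return word.lower()

documents = ["Graph minors A survey",
             "The EPS user interface management system"]

# nested: one list of stemmed words per sentence
nested = [[fake_stem(word) for word in sentence.split(" ")] for sentence in documents]

# flat: one continuous list of stemmed words
flat = [fake_stem(word) for sentence in documents for word in sentence.split(" ")]

print(nested[0])  # ['graph', 'minors', 'a', 'survey']
print(len(flat))  # 10
```

Swapping `fake_stem` for the real `stem` gives exactly the structures described above.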
If you wish to join the words back together at the end, you can do:
documents = [" ".join(sentence) for sentence in documents]
or to do it in one line:
documents = [" ".join([stem(word) for word in sentence.split(" ")]) for sentence in documents]
which keeps the sentence structure, or
documents = " ".join(documents)
which ignores it.
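The re-joining step can be checked the same way; again a lowercasing `fake_stem` stands in for the real stemmer so the example runs on its own:

```python
def fake_stem(word):
    # stand-in for a real stemmer
    return word.lower()

documents = ["Graph minors A survey",
             "The EPS user interface management system"]

# stem and re-join in one line, keeping one string per sentence
joined = [" ".join(fake_stem(w) for w in sentence.split(" ")) for sentence in documents]
print(joined[0])  # graph minors a survey

# or collapse everything into a single string
one_string = " ".join(joined)
print(one_string)  # graph minors a survey the eps user interface management system
```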
Upvotes: 42
Reputation: 10717
Alright. So, using the stemming package, you'd have something like this:
from stemming.porter2 import stem
from itertools import chain

def flatten(listOfLists):
    "Flatten one level of nesting"
    return list(chain.from_iterable(listOfLists))

def stemall(documents):
    return flatten([[stem(word) for word in line.split(" ")] for line in documents])
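A quick check of the `flatten` helper on its own, with plain lists so no stemmer is needed:

```python
from itertools import chain

def flatten(list_of_lists):
    "Flatten one level of nesting"
    return list(chain.from_iterable(list_of_lists))

result = flatten([["graph", "minor"], ["survey"]])
print(result)  # ['graph', 'minor', 'survey']
```

`chain.from_iterable` removes exactly one level of nesting, which is why `stemall` returns one continuous list of words.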
Upvotes: 4
Reputation: 181745
You might want to have a look at the NLTK (Natural Language ToolKit). It has a module nltk.stem which contains various different stemmers.
See also this question.
Upvotes: 7