Reputation:
I have a list of paragraphs and I want to remove stopwords from all of them.
I first split each paragraph into words, then checked each word against the stopwords and appended it if it was not a stopword. This works for a single paragraph, but when I try it on the whole list of paragraphs it produces one flat list of all the words instead of grouping the words by paragraph.
g = []
h = []
for i in f[0:2]:
    word_token = npl.tokenize.word_tokenize(i)
    for j in word_token:
        if j not in z:
            g.append(j)
    h.append(g)
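The flattening happens because g is created once, before the outer loop, so every paragraph's words end up in the same list. A minimal sketch of the grouped version (assuming npl is nltk imported under that name, f is the list of paragraphs, and z is the stopword set defined below):

h = []
for i in f[0:2]:
    g = []                                    # fresh token list for each paragraph
    for j in npl.tokenize.word_tokenize(i):
        if j not in z:
            g.append(j)
    h.append(g)                               # h is now a list of per-paragraph token lists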
Example
Y="'Take a low budget, inexperienced actors doubling as production staff\x97 as well as limited facilities\x97and you can\'t expect much more than "Time Chasers" gives you, but you can absolutely expect a lot less. This film represents a bunch of good natured friends and neighbors coming together to collaborate on an interesting project. If your cousin had been one of those involved, you would probably think to yourself, "ok, this movie is terrible... but a really good effort." For all the poorly delivered dialog and ham-fisted editing, "Time Chasers" has great scope and ambition... and one can imagine it was necessary to shoot every scene in only one or two takes. So, I\'m suggesting people cut "Time Chasers" some slack before they cut in the jugular. That said, I\'m not sure I can ever forgive the pseudo-old lady from the grocery store for the worst delivery every wrenched from the jaws of a problematic script.'"
z = set(npl.corpus.stopwords.words("english"))
x = []
word_token = npl.tokenize.word_tokenize(y)
for i in word_token:
    if i not in z:
        x.append(i)
print(np.array(x))
Output:
['Take' 'low' 'budget' ',' 'inexperienced' 'actors' 'doubling'
'production' 'staff\x97' 'well' 'limited' 'facilities\x97and' 'ca' "n't"
'expect' 'much' '``' 'Time' 'Chasers' "''" 'gives' ',' 'absolutely'
'expect' 'lot' 'less' '.' 'This' 'film' 'represents' 'bunch' 'good'
'natured' 'friends' 'neighbors' 'coming' 'together' 'collaborate'
'interesting' 'project' '.' 'If' 'cousin' 'one' 'involved' ',' 'would'
'probably' 'think' ',' '``' 'ok' ',' 'movie' 'terrible' '...' 'really'
'good' 'effort' '.' "''" 'For' 'poorly' 'delivered' 'dialog' 'ham-fisted'
'editing' ',' '``' 'Time' 'Chasers' "''" 'great' 'scope' 'ambition' '...'
'one' 'imagine' 'necessary' 'shoot' 'every' 'scene' 'one' 'two' 'takes'
'.' 'So' ',' 'I' "'m" 'suggesting' 'people' 'cut' '``' 'Time' 'Chasers'
"''" 'slack' 'cut' 'jugular' '.' 'That' 'said' ',' 'I' "'m" 'sure' 'I'
'ever' 'forgive' 'pseudo-old' 'lady' 'grocery' 'store' 'worst' 'delivery'
'every' 'wrenched' 'jaws' 'problematic' 'script' '.']
I want this same kind of output for the whole list of paragraphs, with each paragraph's tokens kept in its own list.
Upvotes: 1
Views: 63
Reputation: 6649
Given a list:
doc_set = ['my name is omprakash', 'my name is rajesh']
Do:
from nltk.tokenize import RegexpTokenizer
from nltk.corpus import stopwords

tokenizer = RegexpTokenizer(r'\w+')           # keeps word characters only, drops punctuation
en_stop = set(stopwords.words('english'))

cleaned_texts = []
for doc in doc_set:
    tokens = tokenizer.tokenize(doc)
    stopped_tokens = [tok for tok in tokens if tok not in en_stop]
    cleaned_texts.append(stopped_tokens)      # one token list per paragraph
Output:
[['name', 'omprakash'], ['name', 'rajesh']]
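Note that RegexpTokenizer(r'\w+') drops punctuation, whereas the output in the question keeps tokens like ',' and '.'. If you want that behaviour, a sketch of the same per-paragraph loop using NLTK's word_tokenize instead (assuming the punkt tokenizer data has been downloaded):

from nltk.tokenize import word_tokenize
from nltk.corpus import stopwords

en_stop = set(stopwords.words('english'))

cleaned_texts = []
for doc in doc_set:
    tokens = word_tokenize(doc)               # keeps punctuation as separate tokens
    cleaned_texts.append([t for t in tokens if t not in en_stop])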
If you put them into a pandas DataFrame, you can see:
import pandas as pd
df = pd.DataFrame()
df['unclean_text'] = doc_set
df['clean_text'] = cleaned_texts
Output:
           unclean_text         clean_text
0  my name is omprakash  [name, omprakash]
1     my name is rajesh     [name, rajesh]
PS: 'my' is a stopword and hence it is excluded
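If you need the cleaned text back as plain strings rather than token lists, one option is to join each token list (the clean_text_str column name here is just illustrative):

# Join each token list back into a single cleaned string
df['clean_text_str'] = [' '.join(tokens) for tokens in cleaned_texts]
print(df['clean_text_str'].tolist())   # ['name omprakash', 'name rajesh']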
Upvotes: 1