I figured out how to use a TF-IDF scheme to capture the distribution of words across a document. However, I now want to create a vocabulary of the most frequent and least frequent words from a list of sentences.
Here is part of my text preprocessing:
print(df.shape) ->
(17298, 2)
print(df.columns) ->
Index(['screen_name', 'text'], dtype='object')
import re, nltk
from nltk.stem import WordNetLemmatizer
lemmatizer = WordNetLemmatizer()

def process(txt):
    txt = re.sub(r"[^\w\s]", "", txt)         # strips punctuation, including '@'
    txt = re.sub(r"@[A-Za-z0-9_]+", "", txt)  # '@' is already gone, so handles survive as plain words
    tokens = nltk.word_tokenize(txt)
    return [lemmatizer.lemmatize(token).lower() for token in tokens]

df['text'] = df['text'].apply(process)
And here is my second attempt:
import nltk
nltk.download('stopwords')
from nltk.corpus import stopwords
from itertools import chain
from collections import Counter

stop = set(stopwords.words('english'))
df['text'] = df['text'].apply(lambda x: [item for item in x if item not in stop])

all_words = list(chain.from_iterable(df['text']))
for i in all_words:
    x = Counter(df['text'][i])
    res = [word for word, count in x.items() if count == 1]
    print(res)
In the above approach I want to extract the most frequent and least frequent words from the list of sentences, but my attempt didn't produce that output. What should I do? Is there an elegant way to make this happen? Any ideas? Thanks
Example data snippet:
Here is the data I used; the file can be found here: example data
Sample input and output:
inputList = ["RT @GOPconvention: #Oregon votes today. That means 62 days until the @GOPconvention!",
             "RT @DWStweets: The choice for 2016 is clear: We need another Democrat in the White House. #DemDebate #WeAreDemocrats ",
             "Trump's calling for trillion dollar tax cuts for Wall Street.",
             "From Chatham Town Council to Congress, @RepRobertHurt has made a strong mark on his community. Proud of our work together on behalf of VA!"]
Sample token output (for the first tweet):
['rt', 'gopconvention', 'oregon', 'vote', 'today', 'that', 'mean', '62', 'day', 'until', 'gopconvention', 'http', 't', 'co', 'ooh9fvb7qs']
Desired output: a vocabulary of the most frequent words and the least frequent words from the given data. Any idea how to get this done? Thanks
collections.Counter() can do this for you. I couldn't get to your data link, but copying and pasting the text you posted as an example, here's how it could be done:
>>> import collections
>>> s = ("in above approach I want to create most frequent and least frequent "
...      "words from list of sentences, but my attempt didn't produce that outuput? "
...      "what should I do? any elegant way to make this happen? any idea? can anyone "
...      "give me possible idea to make this happen? Thanks")
>>> c = dict(collections.Counter(s.split()))
>>> c
{'in': 1, 'above': 1, 'approach': 1, 'I': 2, 'want': 1, 'to': 3, 'create': 1,
'most': 1, 'frequent': 2, 'and': 1, 'least': 1, 'words': 1, 'from': 1,
'list': 1, 'of': 1, 'sentences,': 1, 'but': 1, 'my': 1, 'attempt': 1,
"didn't": 1, 'produce': 1, 'that': 1, 'outuput?': 1, 'what': 1, 'should': 1,
'do?': 1, 'any': 2, 'elegant': 1, 'way': 1, 'make': 2, 'this': 2, 'happen?':
2, 'idea?': 1, 'can': 1, 'anyone': 1, 'give': 1, 'me': 1, 'possible': 1,
'idea': 1, 'Thanks': 1}
>>> maxval = max(c.values())
>>> print([word for word in c if c[word] == maxval])
['to']
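For the least frequent words, the same pattern works with min() in place of max(); a minimal sketch continuing the session above (my variable names, not from the original post):

>>> minval = min(c.values())
>>> least_common = [word for word in c if c[word] == minval]

Here least_common collects every word that appears exactly once, which is what the count == 1 filter in your second attempt was aiming for.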
You'll want to strip out punctuation marks and the like first; otherwise "happen" and "happen?", for example, get counted as two different words. But you'll notice that c here is a dictionary where the keys are words and the values are how many times each word shows up in the string.
EDIT: Here's something that will work across a list of multiple Tweets like you have. You can use a regular expression to first simplify each Tweet to all lower-case, no punctuation marks, etc.
from collections import Counter
import re

fakenews = ["RT @GOPconvention: #Oregon votes today. That means 62 days until the @GOPconvention!",
            "RT @DWStweets: The choice for 2016 is clear: We need another Democrat in the White House. #DemDebate #WeAreDemocrats ",
            "Trump's calling for trillion dollar tax cuts for Wall Street.",
            "From Chatham Town Council to Congress, @RepRobertHurt has made a strong mark on his community. Proud of our work together on behalf of VA!"]

# Strip out any non-alphanumeric, non-whitespace characters
pattern = re.compile(r'([^\s\w]|_)+')

big_dict = {}
for tweet in fakenews:
    tweet_simplified = pattern.sub('', tweet).lower()

    # Get the word count for this Tweet, then add it to the main dictionary
    word_count = dict(Counter(tweet_simplified.split()))
    for word in word_count:
        if word in big_dict:
            big_dict[word] += word_count[word]
        else:
            big_dict[word] = word_count[word]

# Start with the most frequently used words, and count down.
maxval = max(big_dict.values())
print("Word frequency:")
for i in range(maxval, 0, -1):
    words = [w for w in big_dict if big_dict[w] == i]
    print("%d - %s" % (i, ', '.join(words)))
Output:
Word frequency:
3 - the, for
2 - rt, gopconvention, on, of
1 - oregon, votes, today, that, means, 62, days, until, dwstweets, choice, 2016, is, clear, we, need, another, democrat, in, white, house, demdebate, wearedemocrats, trumps, calling, trillion, dollar, tax, cuts, wall, street, from, chatham, town, council, to, congress, reproberthurt, has, made, a, strong, mark, his, community, proud, our, work, together, behalf, va
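If the goal is a fixed-size vocabulary of the most and least frequent words rather than the full frequency table, collections.Counter can also hand you both ends directly via most_common(). A minimal sketch building on big_dict from the code above; the cutoff n = 10 is an arbitrary choice, not part of the original answer:

counts = Counter(big_dict)  # Counter accepts an existing word -> count mapping
n = 10                      # hypothetical vocabulary size; adjust as needed
most_frequent = [w for w, _ in counts.most_common(n)]       # top n words
least_frequent = [w for w, _ in counts.most_common()[-n:]]  # bottom n words
print(most_frequent)
print(least_frequent)

most_common() with no argument returns every word sorted from most to least frequent, so slicing the tail gives the least frequent words (ties are broken arbitrarily).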