Arsalan

Reputation: 373

iterate over rows in pandas and count unique hashtags

I have a CSV file containing thousands of tweets. Let's say the data is as follows:

Tweet_id   hashtags_in_the_tweet

Tweet_1    [trump, clinton]
Tweet_2    [trump, sanders]
Tweet_3    [politics, news]
Tweet_4    [news, trump]
Tweet_5    [flower, day]
Tweet_6    [trump, impeach]

As you can see, the data contains the tweet id and the hashtags in each tweet. What I want is to go through all the rows and end up with something like a value count:

Hashtag    count
trump      4
news       2
clinton    1
sanders    1
politics   1
flower     1
day        1
impeach    1

Considering that the CSV file contains 1 million rows (1 million tweets), what is the best way to do this?

Upvotes: 1

Views: 689

Answers (5)

Arsalan

Reputation: 373

The other answers were helpful, but they didn't work on my actual data. The problems were: 1) the value of the 'hashtags' field for some tweets is NaN or '[]'; 2) the value of the 'hashtags' field in the dataframe is a single string. The other answers assume the values are lists of hashtags, e.g. ['trump', 'clinton'], while in my data each value is just a str: '[trump, clinton]'. So I added some lines to @jpp's answer:

# delete rows whose hashtags value is NaN or '[]'
df = df[df.hashtags != '[]']
df.dropna(subset=['hashtags'], inplace=True)

# convert each hashtags value from a str like '[trump, clinton]' to a list
df.hashtags = df.hashtags.str.strip('[]')
df.hashtags = df.hashtags.str.split(', ')

from collections import Counter
from itertools import chain

c = Counter(chain.from_iterable(df['hashtags'].values.tolist()))

res = pd.DataFrame(c.most_common())\
        .set_axis(['Hashtag', 'count'], axis=1, inplace=False)

print(res)

Upvotes: 1

jpp

Reputation: 164683

Counter + chain

Pandas methods aren't designed for series of lists. No vectorised approach exists. One way is to use collections.Counter from the standard library:

from collections import Counter
from itertools import chain

c = Counter(chain.from_iterable(df['hashtags_in_the_tweet'].values.tolist()))

res = pd.DataFrame(c.most_common())\
        .set_axis(['Hashtag', 'count'], axis=1, inplace=False)

print(res)

    Hashtag  count
0     trump      4
1      news      2
2   clinton      1
3   sanders      1
4  politics      1
5    flower      1
6       day      1
7   impeach      1
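
Note: on pandas 2.0+ the inplace argument to set_axis has been removed, so the last step raises a TypeError there. To get the same frame on any recent version, pass the column names straight to the DataFrame constructor:

# reuses the Counter c from above
res = pd.DataFrame(c.most_common(), columns=['Hashtag', 'count'])
print(res)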

Setup

df = pd.DataFrame({'Tweet_id': [f'Tweet_{i}' for i in range(1, 7)],
                   'hashtags_in_the_tweet': [['trump', 'clinton'], ['trump', 'sanders'], ['politics', 'news'],
                                             ['news', 'trump'], ['flower', 'day'], ['trump', 'impeach']]})

print(df)

  Tweet_id hashtags_in_the_tweet
0  Tweet_1      [trump, clinton]
1  Tweet_2      [trump, sanders]
2  Tweet_3      [politics, news]
3  Tweet_4         [news, trump]
4  Tweet_5         [flower, day]
5  Tweet_6      [trump, impeach]

Upvotes: 2

Abhi

Reputation: 4233

One alternative is to use np.hstack, convert the result to a pd.Series, and then use value_counts.

import numpy as np

# stack all the hashtag lists into one flat array, then count occurrences
res = pd.Series(np.hstack(df['hashtags_in_the_tweet'])).value_counts().to_frame('count')
res = res.rename_axis('Hashtag').reset_index()

print(res)

    Hashtag  count
0     trump      4
1      news      2
2   sanders      1
3   impeach      1
4   clinton      1
5    flower      1
6  politics      1
7       day      1

Upvotes: 2

BENY

Reputation: 323266

Using np.unique:

import numpy as np

v, c = np.unique(np.concatenate(df.hashtags_in_the_tweet.values), return_counts=True)
pd.DataFrame({'Hashtag': v, 'Count': c})

Even though the problem looks different, it is still related to the unnesting problem:

unnesting(df,['hashtags_in_the_tweet'])['hashtags_in_the_tweet'].value_counts()
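
On pandas 0.25+, DataFrame.explode covers the same unnesting step without needing a helper function:

# assumes the column holds actual lists, as in jpp's setup above
df.explode('hashtags_in_the_tweet')['hashtags_in_the_tweet'].value_counts()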

Upvotes: 2

arra

Reputation: 136

Sounds like you want something like collections.Counter, which you might use like this...

from collections import Counter
from functools import reduce 
import operator
import pandas as pd 

fold = lambda f, acc, xs: reduce(f, xs, acc)
df = pd.DataFrame({'Tweet_id': ['Tweet_%s'%i for i in range(1, 7)],
                   'hashtags':[['t', 'c'], ['t', 's'], 
                               ['p','n'], ['n', 't'], 
                               ['f', 'd'], ['t', 'i', 'c']]})
fold(operator.add, Counter(), [Counter(x) for x in df.hashtags.values])

which gives you,

Counter({'c': 2, 'd': 1, 'f': 1, 'i': 1, 'n': 2, 'p': 1, 's': 1, 't': 4})

Edit: I think jpp's answer will be quite a bit faster. If time really is a constraint, I would avoid reading the data into a DataFrame in the first place. I don't know what the raw CSV file looks like, but reading it as a text file line by line, ignoring the first token, and feeding the rest into a Counter may end up being quicker still.
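
A rough sketch of that line-by-line approach (the filename and column layout are assumptions; it supposes the hashtags cell is a single quoted field like "[trump, clinton]"):

from collections import Counter
import csv

hashtag_counts = Counter()
with open('tweets.csv', newline='') as f:        # hypothetical filename
    reader = csv.reader(f)
    next(reader)                                 # skip the header row
    for row in reader:
        tags = row[1].strip('[]')                # assumed: second column holds the bracketed hashtags
        if tags:
            hashtag_counts.update(tags.split(', '))

print(hashtag_counts.most_common())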

Upvotes: 1
