mari

Reputation: 167

Efficient implementation of word counts across several lists using Python

I have a list of comments in the following format:

Comments = [['hello', 'world'], ['would', 'hard', 'press'], ['find', 'place', 'less']]

wordset = {'hello', 'world', 'hard', 'would', 'press', 'find', 'place', 'less'}

I want a table or dataframe that has wordset as its index and the individual counts for each comment in Comments.

The following code produces the required dataframe, but it takes a long time, and I am looking for a more efficient implementation. Since the corpus is large, this has a huge impact on the efficiency of our ranking algorithm.

import pandas as pd

result = pd.DataFrame()
for comment in Comments:
    worddict_terms = dict.fromkeys(wordset, 0)   # start every word in wordset at 0
    for item in comment:
        worddict_terms[item] += 1
        df_comment = pd.DataFrame.from_dict([worddict_terms])  # one-row frame for this comment
    frames = [result, df_comment]
    result = pd.concat(frames)   # concatenating inside the loop is what makes this slow

Comments_raw_terms = result.transpose()

The result we expect is:

        0   1   2
hello   1   0   0
world   1   0   0
would   0   1   0
press   0   1   0
find    0   0   1
place   0   0   1
less    0   0   1
hard    0   1   0

Upvotes: 3

Views: 66

Answers (2)

Aakash Goel

Reputation: 1030

I think your nested for loops are increasing the complexity. The code below replaces the two for loops with a single map call. It only goes as far as producing the count dictionary for each comment (restricted here to "hello" and "world"); the remaining step of building the table with pandas can be kept from your own code.

from collections import Counter
from funcy import project

def fun(comment):
    wordset = {'hello', 'world'}
    temp_dict_comment = dict(Counter(comment))        # raw word counts for this comment
    final_dict = project(temp_dict_comment, wordset)  # keep only the words in wordset
    print(final_dict)

Comments = [['hello', 'world'], ['would', 'hard', 'press'],
            ['find', 'place', 'less', 'excitingit', 'wors', 'watch', 'paint', 'dri']]
list(map(fun, Comments))

This should help, as it uses a single map instead of two for loops.
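
For completeness, here is a minimal sketch (not part of the original answer) of how the per-comment count dictionaries could be turned into the table the question asks for, assuming the full wordset from the question and a single DataFrame construction instead of repeated concat:

from collections import Counter
import pandas as pd

Comments = [['hello', 'world'], ['would', 'hard', 'press'], ['find', 'place', 'less']]
wordset = {'hello', 'world', 'hard', 'would', 'press', 'find', 'place', 'less'}

# one Counter per comment, restricted to wordset, assembled in a single call
counts = [Counter(w for w in comment if w in wordset) for comment in Comments]
result = (pd.DataFrame(counts, columns=sorted(wordset))
            .fillna(0)
            .astype(int)
            .T)   # words as index, one column per comment
print(result)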

Upvotes: 3

MaxU - stand with Ukraine

Reputation: 210882

Try this approach:

import pandas as pd
from sklearn.feature_extraction.text import CountVectorizer

vect = CountVectorizer()

# join each tokenized comment back into a single string, then vectorize
text = pd.Series(Comments).str.join(' ')
X = vect.fit_transform(text)

# one row per comment, one column per word found in the corpus
r = pd.DataFrame(X.toarray(), columns=vect.get_feature_names())

Result:

In [49]: r
Out[49]:
   find  hard  hello  less  place  press  world  would
0     0     0      1     0      0      0      1      0
1     0     1      0     0      0      1      0      1
2     1     0      0     1      1      0      0      0

In [50]: r.T
Out[50]:
       0  1  2
find   0  0  1
hard   0  1  0
hello  1  0  0
less   0  0  1
place  0  0  1
press  0  1  0
world  1  0  0
would  0  1  0
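
If the index has to match the predefined wordset exactly (including words that never occur in any comment), CountVectorizer's vocabulary parameter can pin the feature set. A small sketch along the same lines, using the wordset from the question:

import pandas as pd
from sklearn.feature_extraction.text import CountVectorizer

Comments = [['hello', 'world'], ['would', 'hard', 'press'], ['find', 'place', 'less']]
wordset = {'hello', 'world', 'hard', 'would', 'press', 'find', 'place', 'less'}

text = pd.Series(Comments).str.join(' ')
vect = CountVectorizer(vocabulary=sorted(wordset))   # columns fixed to the given wordset
r = pd.DataFrame(vect.fit_transform(text).toarray(),
                 columns=sorted(wordset)).T          # words as index, comments as columns
print(r)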

Pure Pandas solution:

In [61]: pd.get_dummies(text.str.split(expand=True), prefix_sep='', prefix='')
Out[61]:
   find  hello  would  hard  place  world  less  press
0     0      1      0     0      0      1     0      0
1     0      0      1     1      0      0     0      1
2     1      0      0     0      1      0     1      0
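
To get the orientation the question asks for (wordset as index, one column per comment), the dummies frame can be transposed and reindexed on the wordset. A short sketch that works for the example data, where no word shows up in more than one split position:

import pandas as pd

Comments = [['hello', 'world'], ['would', 'hard', 'press'], ['find', 'place', 'less']]
wordset = {'hello', 'world', 'hard', 'would', 'press', 'find', 'place', 'less'}

text = pd.Series(Comments).str.join(' ')
dummies = pd.get_dummies(text.str.split(expand=True), prefix='', prefix_sep='')
result = dummies.T.reindex(sorted(wordset), fill_value=0).astype(int)   # words as index
print(result)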

Upvotes: 2
