Reza energy

Reputation: 135

Pandas: getting the most frequent names from a column that holds lists of names

My dataframe looks like this:

star_rating  actors_list
0   9.3     [u'Tim Robbins', u'Morgan Freeman']
1   9.2     [u'Marlon Brando', u'Al Pacino', u'James Caan']
2   9.1     [u'Al Pacino', u'Robert De Niro']
3   9.0     [u'Christian Bale', u'Heath Ledger']
4   8.9     [u'John Travolta', u'Uma Thurman']

I want to extract the most frequent names from the actors_list column. I found the code below; do you have a better suggestion, especially for big data?

import pandas as pd

df = pd.read_table(r'https://raw.githubusercontent.com/justmarkham/pandas-videos/master/data/imdb_1000.csv', sep=',')
df.actors_list.str.replace(r"(u\'|[\[\]]|\')", '').str.lower().str.split(',', expand=True).stack().value_counts()

Expected output (for this data):

robert de niro    13
tom hanks         12
clint eastwood    11
johnny depp       10
al pacino         10
james stewart      9

Upvotes: 3

Views: 297

Answers (4)

cs95

Reputation: 402814

By my tests, it would be much faster to do the regex cleanup after counting.

from itertools import chain
import re

# Count the raw u'...' tokens first, then clean up only the
# (much smaller) index of the resulting Series.
p = re.compile(r"""^u['"](.*)['"]$""")
ser = pd.Series(list(chain.from_iterable(
    x.title().split(', ') for x in df.actors_list.str[1:-1]))).value_counts()
ser.index = [p.sub(r"\1", x) for x in ser.index.tolist()]


ser.head()

Robert De Niro    18
Brad Pitt         14
Clint Eastwood    14
Tom Hanks         14
Al Pacino         13
dtype: int64
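To make the "count first, clean up later" idea above self-contained, here is a minimal sketch on the sample rows from the question rather than the full IMDB file (the dataframe below is constructed for illustration, not taken from the answer):

```python
from itertools import chain
import re

import pandas as pd

df = pd.DataFrame({
    "actors_list": [
        "[u'Tim Robbins', u'Morgan Freeman']",
        "[u'Marlon Brando', u'Al Pacino', u'James Caan']",
        "[u'Al Pacino', u'Robert De Niro']",
    ]
})

# Count the raw u'...' tokens first...
ser = pd.Series(list(chain.from_iterable(
    x.split(', ') for x in df.actors_list.str[1:-1]))).value_counts()

# ...then strip the u'...' wrapper from the (much smaller) index.
p = re.compile(r"""^u['"](.*)['"]$""")
ser.index = [p.sub(r"\1", x) for x in ser.index]

print(ser.head())
```

The payoff is that the regex runs once per distinct name instead of once per row.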

Upvotes: 4

Reza energy

Reputation: 135

Timing these answers, I got the chart below.

[benchmark chart]

where

  • coldspeed's code is wen2()
  • Dark's code is wen4()
  • My code is wen1()
  • W-B's code is wen3()

Upvotes: 0

BENY

Reputation: 323316

I would use ast.literal_eval to convert the list-like strings into actual lists:

import ast

df.actors_list = df.actors_list.apply(ast.literal_eval)
pd.DataFrame(df.actors_list.tolist()).melt().value.value_counts()
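On pandas 0.25+, `Series.explode` offers a similar route after the `ast.literal_eval` conversion. A small sketch on the sample rows (the dataframe is constructed here for illustration; this is an alternative to the answer's `melt`, not the author's code):

```python
import ast

import pandas as pd

df = pd.DataFrame({
    "actors_list": [
        "[u'Tim Robbins', u'Morgan Freeman']",
        "[u'Al Pacino', u'Robert De Niro']",
        "[u'Al Pacino', u'James Caan']",
    ]
})

# Parse the string representations into real Python lists...
df["actors_list"] = df["actors_list"].apply(ast.literal_eval)

# ...then put one actor per row and count.
counts = df["actors_list"].explode().value_counts()
print(counts)
```

Unlike the `melt` version, `explode` never materializes a padded wide dataframe, so ragged lists cost no extra memory.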

Upvotes: 3

Bharath M Shetty

Reputation: 30605

It is often better to go with plain Python than to depend on pandas here, since the pandas string operations consume a huge amount of memory when the lists are large.

If the longest list has 1000 elements, every shorter list gets padded with NaNs when you use expand=True, which is a waste of memory. Try this instead.

df = pd.concat([df]*1000) # For the sake of large df. 

%%timeit
df.actors_list.str.replace("(u\'|[\[\]]|\')",'').str.lower().str.split(',',expand=True).stack().value_counts()
10 loops, best of 3: 65.9 ms per loop

%%timeit     
df['actors_list'] = df['actors_list'].str.strip('[]').str.replace(', ',',').str.split(',')
10 loops, best of 3: 24.1 ms per loop

%%timeit
words = {}
for i in df['actors_list']:
    for w in i : 
        if w in words:
            words[w]+=1
        else:
            words[w]=1

100 loops, best of 3: 5.44 ms per loop
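The hand-rolled dict loop above is exactly what `collections.Counter` implements; a sketch assuming actors_list already holds real Python lists (the sample data here is illustrative, not the question's full dataset):

```python
from collections import Counter

# Assumes each element is already a list of names, e.g. after the
# strip/split preprocessing shown above.
actors_list = [
    ["Tim Robbins", "Morgan Freeman"],
    ["Al Pacino", "Robert De Niro"],
    ["Al Pacino", "James Caan"],
]

words = Counter()
for names in actors_list:
    words.update(names)

print(words.most_common(3))
```

`Counter.update` does the same increment-or-insert work as the manual if/else, in C, and `most_common` replaces a sort over the dict items.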

Upvotes: 3
