Donbeo

Reputation: 17617

Fastest way to get nlargest by key in pandas

Given a DataFrame containing the 3 columns ['User', 'Feature', 'Value'], I need to get the top-N Features for each User.
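For concreteness, a toy version of the input and the desired output for N = 2 (the column names match the question; the data is made up):

import pandas as pd

# toy input with the same three columns
df = pd.DataFrame({
    'User':    ['u1', 'u1', 'u1', 'u2', 'u2'],
    'Feature': ['f1', 'f2', 'f3', 'f1', 'f2'],
    'Value':   [0.5, 0.9, 0.1, 0.3, 0.7],
})

# desired result for N = 2: the two largest Values per User
#   u1 -> f2 (0.9), f1 (0.5)
#   u2 -> f2 (0.7), f1 (0.3)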

I have tried a few options, but all of them seem too slow. The pandas nlargest method actually appears to be slower than a more handmade approach, as the following code shows:

import pandas as pd
import numpy as np
import time

np.random.seed(0)

max_rows = 1000000
n_users = 10000
max_entries = 100

# build the test data: every row has a unique Feature, a random User and a Value
df = pd.DataFrame()
df['Value'] = [i * 0.1 for i in range(max_rows)]
df['User'] = ['u%i' % i for i in np.random.randint(0, n_users, size=max_rows)]
df['Feature'] = ['f%i' % i for i in range(max_rows)]

# PART 1: aggregate Value per (User, Feature)
t0 = time.time()
dfg = df.groupby(['User', 'Feature'], sort=False, as_index=True).sum()
t1 = time.time()
print("first groupby takes: ", t1 - t0, "for nrows=", max_rows)

# PART 2, Option 1: sort each User group, keep the first max_entries rows
dfg2 = dfg.groupby(dfg.index.get_level_values(0),
                   group_keys=False,
                   sort=False).apply(lambda x: x.sort_values('Value', ascending=False)
                                                .iloc[:max_entries])
t2 = time.time()
print("second groupby takes:", t2 - t1)

# PART 2, Option 2: nlargest per User group
dfg = dfg.reset_index(level='Feature')
dfg3 = dfg.groupby(dfg.index.get_level_values(0), sort=False, group_keys=False) \
          .apply(lambda x: x.nlargest(max_entries, 'Value'))
t3 = time.time()
print("Third groupby takes", t3 - t2)

Example output:

In [29]: first groupby takes:  0.914537191391 for nrows= 1000000
second groupby takes: 17.5580070019
Third groupby takes 59.3013348579

Is there a way to make it faster? Should I avoid pandas for a problem like this?

Also, why is nlargest so slow? I was expecting it to be faster.

Upvotes: 0

Views: 597

Answers (1)

BENY

Reputation: 323276

You can use head.

Your solution (Option 1):

%timeit dfg.groupby(dfg.index.get_level_values(0),group_keys=False,sort=False).apply(lambda x: x.sort_values('Value', ascending=False).iloc[:max_entries])    
1 loop, best of 3: 9.1 s per loop

My solution:

%timeit dfg.sort_values('Value',ascending=False).groupby(level=0).head(max_entries)
1 loop, best of 3: 201 ms per loop
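The speedup comes from sorting the whole frame once and letting head slice the first max_entries rows of each group, instead of invoking a Python-level lambda once per group as both apply variants do. A quick sanity check that the two approaches select the same rows (a sketch, reusing dfg and max_entries from the question after 'Feature' has been reset into a column, and assuming no ties in Value):

import pandas as pd

# one global sort, then slice each User group
fast = dfg.sort_values('Value', ascending=False).groupby(level=0).head(max_entries)

# the question's Option 2: per-group nlargest via apply
slow = dfg.groupby(dfg.index.get_level_values(0), sort=False, group_keys=False) \
          .apply(lambda x: x.nlargest(max_entries, 'Value'))

# same rows, modulo row order
a = fast.reset_index().sort_values(['User', 'Feature']).reset_index(drop=True)
b = slow.reset_index().sort_values(['User', 'Feature']).reset_index(drop=True)
pd.testing.assert_frame_equal(a, b)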

Upvotes: 4
