Sara

Reputation: 973

pandas groupby has got slower

I must have done something wrong, because my Python scripts are getting slower. In this script I use two of the columns, queryid and bitscore, and I just want the highest score for each common queryid value. I use pandas' groupby function to do that. Then I keep every query with a score >= 90% of the highest score.

startTime = datetime.now()

data = pd.read_csv(inputfile, names=['queryid', 'subjectid', 'bitscore'], sep='\t')

print "INPUT INFORMATION"
print "Blast inputfile has:", "{:,}".format(data.shape[0]), "records"

print data.dtypes

print "Time test 1 :", str(datetime.now()-startTime)    
data['max'] = data.groupby('queryid')['bitscore'].transform(lambda x: x.max())
print "Time test 2", str(datetime.now()-startTime)    
data = data[data['bitscore']>=0.9*data['max']] 
print "Time test 3", str(datetime.now()-startTime)
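The filter itself does what I expect on a toy example (illustrative values, same column names as the real file):

```python
import pandas as pd

# Toy stand-in for the real BLAST file (illustrative values only)
data = pd.DataFrame({
    "queryid":  ["q1", "q1", "q1", "q2", "q2"],
    "bitscore": [100.0, 95.0, 80.0, 50.0, 44.0],
})

# Highest bitscore per queryid, broadcast back onto every row
data["max"] = data.groupby("queryid")["bitscore"].transform(lambda x: x.max())

# Keep only rows within 90% of the per-query maximum
data = data[data["bitscore"] >= 0.9 * data["max"]]
# Keeps 100.0 and 95.0 for q1, and only 50.0 for q2 (44.0 < 0.9 * 50.0)
```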

Here is the output:

INPUT INFORMATION
Blast inputfile has: 1,367,808 records

queryid             object
subjectid           object
bitscore           float64
dtype: object

Time test 1 : 0:00:05.075944
Time test 2 0:30:40.750674
Time test 3 0:30:41.317064

There are a lot of records, but still... The machine has 100+ GB of memory. I ran it yesterday and it took 26 minutes to reach "Time test 2". Now it's 30 minutes. Do you think I should wipe Python and reinstall it? Has anybody had this happen?

Upvotes: 0

Views: 680

Answers (1)

Jeff

Reputation: 128978

For completeness, using pandas 0.14.1:

In [13]: pd.set_option('max_rows',10)

In [9]: N = 1400000

In [10]: ngroups = 1000

In [11]: groups = [ "A%04d" % i for i in xrange(ngroups) ]

In [12]: df = DataFrame(dict(A = np.random.choice(groups,size=N,replace=True), B = np.random.randn(N)))

In [14]: df
Out[14]: 
             A         B
0        A0722  0.621374
1        A0390 -0.843030
2        A0897 -1.633165
3        A0546  0.483448
4        A0366  1.866380
...        ...       ...
1399995  A0515 -1.051668
1399996  A0591 -1.216455
1399997  A0766 -0.914020
1399998  A0635  0.258893
1399999  A0577  1.874328

[1400000 rows x 2 columns]

In [15]: df.groupby('A')['B'].transform('max')
Out[15]: 
0    3.688245
1    3.829529
2    3.717359
...
1399997    4.213080
1399998    3.121092
1399999    2.990630
Name: B, Length: 1400000, dtype: float64

In [16]: %timeit df.groupby('A')['B'].transform('max')
1 loops, best of 3: 437 ms per loop

In [17]: ngroups = 10000

In [18]: groups = [ "A%04d" % i for i in xrange(ngroups) ]

In [19]: df = DataFrame(dict(A = np.random.choice(groups,size=N,replace=True), B = np.random.randn(N)))

In [20]: %timeit df.groupby('A')['B'].transform('max')
1 loops, best of 3: 1.43 s per loop

In [23]: ngroups = 100000

In [24]: groups = [ "A%05d" % i for i in xrange(ngroups) ]

In [25]: df = DataFrame(dict(A = np.random.choice(groups,size=N,replace=True), B = np.random.randn(N)))

In [27]: %timeit df.groupby('A')['B'].transform('max')
1 loops, best of 3: 10.3 s per loop

So the transformation scales roughly as O(number_of_groups).
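Note also that the timings above use the string aggregation `'max'`, while your script passes a Python lambda: the string spelling can dispatch to pandas' cythonized groupby path, whereas the lambda is called once per group in Python. A minimal sketch of the comparison (modern Python 3 / pandas syntax rather than the 0.14.1 session above, and a smaller N so it runs quickly):

```python
import numpy as np
import pandas as pd

# Synthetic data shaped like the session above (smaller N for a quick run)
N, ngroups = 100_000, 1000
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "A": rng.choice(["A%04d" % i for i in range(ngroups)], size=N),
    "B": rng.standard_normal(N),
})

# String spelling: dispatches to the cythonized groupby max
fast = df.groupby("A")["B"].transform("max")

# Lambda spelling: the function is called once per group in Python
slow = df.groupby("A")["B"].transform(lambda x: x.max())

# Identical results; only the time taken differs
assert fast.equals(slow)
```

Wrapping each call in `%timeit` (as in the session above) shows the gap grow with the number of groups.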

Upvotes: 2
