ely

Reputation: 77434

Python Pandas: What causes slowdown in different column selection methods?

After seeing this question about replicating SQL select-statement-like behavior in Pandas, I added this answer showing two ways that could shorten the verbose syntax given in the accepted answer to that question.

After playing around with them, I found that my two shorter-syntax methods are significantly slower, and I am hoping someone can explain why.

You can assume any functions used below are either from Pandas, IPython, or from the question and answers linked above.

import pandas
import numpy as np
from functools import reduce  # a builtin in Python 2; must be imported in Python 3

N = 100000
df = pandas.DataFrame(np.round(np.random.rand(N, 5) * 10))

def pandas_select(dataframe, select_dict):
    # For each row, apply every (operator, value) test and and the results together.
    inds = dataframe.apply(lambda x: reduce(lambda v1, v2: v1 and v2,
                           [elem[0](x[key], elem[1])
                            for key, elem in select_dict.items()]), axis=1)
    return dataframe[inds]



%timeit _ = df[(df[1]==3) & (df[2]==2) & (df[4]==5)]
%timeit _ = df[df.apply(lambda x: (x[1]==3) & (x[2]==2) & (x[4]==5), axis=1)]

import operator
select_dict = {1:(operator.eq,3), 2:(operator.eq,2), 4:(operator.eq,5)}
%timeit _ = pandas_select(df, select_dict)

The output I get is:

In [6]: %timeit _ = df[(df[1]==3) & (df[2]==2) & (df[4]==5)]
100 loops, best of 3: 4.91 ms per loop

In [7]: %timeit _ = df[df.apply(lambda x: (x[1]==3) & (x[2]==2) & (x[4]==5), axis=1)]
1 loops, best of 3: 1.23 s per loop

In [10]: %timeit _ = pandas_select(df, select_dict)
1 loops, best of 3: 1.6 s per loop

I can buy that the use of reduce, the operator functions, and the general call overhead of my pandas_select function could slow it down, but the difference seems excessive. Inside my function I'm using the same df[key] logical_op value syntax, yet it's much slower.

I'm also puzzled as to why the apply version along axis=1 is so much slower. It should just be a shorter way of writing the same thing, no?

Upvotes: 1

Views: 1094

Answers (1)

ecatmur

Reputation: 157374

When you write df[df.apply(lambda x: (x[1]==3) & (x[2]==2) & (x[4]==5), axis=1)], you're calling your lambda once for each of the 100000 rows in the DataFrame. This has substantial overhead, as a Python function call must be executed for every row.

When you write df[(df[1]==3) & (df[2]==2) & (df[4]==5)], there's no per-row overhead; instead, each comparison is applied to an entire column in a single operation, and the loop is executed in native code with the potential for vectorization (e.g. SSE).

This isn't exclusive to Pandas; in general, any numpy operation will be much faster if you treat arrays and matrices in aggregate instead of calling Python functions or inner loops on individual items.
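You can see the same effect in plain NumPy, stripped of the Pandas machinery. This is just an illustrative sketch (the array and variable names are mine, not from the question): both lines below produce the same boolean mask, but the first runs one native-code pass over the array while the second makes a Python-level comparison per element.

```python
import numpy as np

a = np.round(np.random.rand(100000) * 10)

# Vectorized: a single aggregate operation executed in native code.
mask_vec = a == 3

# Element-at-a-time: one Python-level comparison per item.
mask_loop = np.array([x == 3 for x in a])

# Identical results; only the execution strategy differs.
assert (mask_vec == mask_loop).all()
```

Timing the two with %timeit shows the same orders-of-magnitude gap as the DataFrame examples above.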

Upvotes: 5

Related Questions