Mike S

Reputation: 1613

pandas apply() on a groupby() object being run many more times than there are groups

I've inherited some pandas code that I'm trying to optimize. One DataFrame, results, has been created with

results = pd.DataFrame(columns=['plan','volume','avg_denial_increase','std_dev_impact', 'avg_idr_increase', 'std_dev_idr_increase'])
for plan in my_df['plan_name'].unique():
    df1 = my_df[my_df['plan_name'] == plan]
    df1['volume'].fillna(0, inplace=True)
    df1['change'] = df1['idr'] - df1['idr'].shift(1)
    df1['change'].fillna(0, inplace=True)
    df1['impact'] = df1['change'] * df1['volume']
    describe_impact = df1['impact'].describe()
    describe_change = df1['change'].describe()
    results = results.append({'plan': plan,
                              'volume': df1['volume'].mean(),
                              'avg_denial_increase': describe_impact['mean'],
                              'std_dev_impact': describe_impact['std'],
                              'avg_idr_increase': describe_change['mean'],
                              'std_dev_idr_increase': describe_change['std']}, 
                             ignore_index=True)

My first thought was to move everything from under the for-loop into a separate function, get_results_for_plan, and use pandas groupby() and apply() methods. But this has proven to be even slower. Running

%lprun -f get_results_for_plan my_df.groupby('plan_name', sort=False, as_index=False).apply(get_results_for_plan)

returns

Timer unit: 1e-06 s

Total time: 0.77167 s
File: <ipython-input-46-7c36b3902812>
Function: get_results_for_plan at line 1

Line #      Hits         Time  Per Hit   % Time  Line Contents
==============================================================
     1                                           def get_results_for_plan(plan_df):
     2        94      33221.0    353.4      4.3      plan = plan_df.iloc[0]['plan_name']
     3        94      25901.0    275.5      3.4      plan_df['volume'].fillna(0, inplace=True)
     4        94      75765.0    806.0      9.8      plan_df['change'] = plan_df['idr'] - plan_df['idr'].shift(1)
     5        93      38653.0    415.6      5.0      plan_df['change'].fillna(0, inplace=True)
     6        93      57088.0    613.8      7.4      plan_df['impact'] = plan_df['change'] * plan_df['volume']
     7        93     204828.0   2202.5     26.5      describe_impact = plan_df['impact'].describe()
     8        93     201127.0   2162.7     26.1      describe_change = plan_df['change'].describe()
     9        93        129.0      1.4      0.0      return pd.DataFrame({'plan': plan,
    10        93      21703.0    233.4      2.8                           'volume': plan_df['volume'].mean(),
    11        93       4291.0     46.1      0.6                           'avg_denial_increase': describe_impact['mean'],
    12        93       1957.0     21.0      0.3                           'std_dev_impact': describe_impact['std'],
    13        93       2912.0     31.3      0.4                           'avg_idr_increase': describe_change['mean'],
    14        93       1783.0     19.2      0.2                           'std_dev_idr_increase': describe_change['std']},
    15        93     102312.0   1100.1     13.3                         index=[0])

The most glaring issue I see is the number of hits each line has. The number of groups, as counted by

len(my_df.groupby('plan_name', sort=False, as_index=False).groups)

is 72. So why are these lines being hit 94 or 93 times each? (This may be related to this issue, but in that case I'd expect the hit count to be num_groups + 1)
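
One way to separate profiler artifacts from actual calls is to count invocations directly. Here is a minimal sketch on a toy frame (toy and counted are hypothetical names, purely for illustration); note that on older pandas versions, apply evaluates the first group twice to choose between a fast and a slow code path, which shows up as one extra call:

import numpy as np
import pandas as pd

toy = pd.DataFrame({'plan_name': list('aabbcc'),
                    'idr': np.arange(6.0)})

n_calls = 0
def counted(group):
    # Hypothetical wrapper that only counts invocations
    global n_calls
    n_calls += 1
    return group['idr'].mean()

toy.groupby('plan_name', sort=False, as_index=False).apply(counted)
# Older pandas can report 4 calls for 3 groups: apply evaluates the
# first group twice to decide between a fast and a slow path.
print(n_calls, 'calls for', toy['plan_name'].nunique(), 'groups')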

Update: In the %lprun call above, removing sort=False reduces the line hits to 80 for lines 2-6 and 79 for the rest. That's still more hits than I'd expect, but a bit better.

Secondary question: are there better ways to optimize this particular code?

Upvotes: 3

Views: 161

Answers (1)

josemz

Reputation: 1312

Here's a rough draft of what I mean in my comment:

import numpy as np
import pandas as pd

def append_to_list():
    # Collect rows in a plain list, build the DataFrame once at the end
    l = []
    for _ in range(10000):
        l.append(np.random.random(4))
    return pd.DataFrame(l, columns=list('abcd'))

def append_to_df():
    # Grow the DataFrame one row at a time with append()
    cols = list('abcd')
    df = pd.DataFrame(columns=cols)
    for _ in range(10000):
        df = df.append({k: v for k, v in zip(cols, np.random.random(4))},
                       ignore_index=True)
    return df

%timeit append_to_list()
# 31.5 ms ± 925 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)

%timeit append_to_df()
# 9.05 s ± 337 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
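
The gap isn't just constant overhead: DataFrame.append copies the entire frame on every call, so building a frame row by row is quadratic in the number of rows, while appending to a plain list and constructing the DataFrame once at the end stays linear.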

So probably the biggest benefit to your code would be this:

results = []
for plan in my_df['plan_name'].unique():
    df1 = my_df[my_df['plan_name'] == plan]
    df1['volume'].fillna(0, inplace=True)
    df1['change'] = df1['idr'] - df1['idr'].shift(1)
    df1['change'].fillna(0, inplace=True)
    df1['impact'] = df1['change'] * df1['volume']
    describe_impact = df1['impact'].describe()
    describe_change = df1['change'].describe()
    results.append((plan, 
                    df1['volume'].mean(), 
                    describe_impact['mean'],
                    describe_impact['std'], 
                    describe_change['mean'], 
                    describe_change['std']))
results = pd.DataFrame(results, columns=['plan','volume','avg_denial_increase','std_dev_impact', 'avg_idr_increase', 'std_dev_idr_increase'])
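
Going further, the per-group work itself can be done without a Python-level loop. Here is a rough sketch of a fully vectorized version, assuming pandas 0.25+ for the named-aggregation syntax (get_results_vectorized is just an illustrative name):

import pandas as pd

def get_results_vectorized(my_df):
    df = my_df.copy()  # leave the original frame untouched
    df['volume'] = df['volume'].fillna(0)
    # diff() within each group reproduces idr - idr.shift(1) per plan
    df['change'] = df.groupby('plan_name', sort=False)['idr'].diff().fillna(0)
    df['impact'] = df['change'] * df['volume']
    # One aggregation pass replaces the per-group describe() calls
    return (df.groupby('plan_name', sort=False)
              .agg(volume=('volume', 'mean'),
                   avg_denial_increase=('impact', 'mean'),
                   std_dev_impact=('impact', 'std'),
                   avg_idr_increase=('change', 'mean'),
                   std_dev_idr_increase=('change', 'std'))
              .reset_index()
              .rename(columns={'plan_name': 'plan'}))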

Upvotes: 1
