user3833171

Reputation: 43

weighted average in pandas pivot_table

I'm trying to generate a pandas pivot table that calculates an average of values in a series of data columns, weighted by the values in a fixed weights column, and am struggling to find an elegant and efficient way to do this.

df = pd.DataFrame([['A',10,1],['A',20,0],['B',10,1],['B',0,0]],columns=['Group','wt','val'])


  Group  wt  val
0     A  10    1
1     A  20    0
2     B  10    1
3     B   0    0

I want to group by Group and return both a new weight (sum of df.wt -- easy peasy) and an average of df.val weighted by df.wt to yield this:


  Group  weight    val
0     A      30  0.333
1     B      10  1.000

In the real application there are a large number of val columns and one weight column, along with other columns that I want to apply different aggfuncs to. So while I realize I could do this by direct application of groupby, it's messier. Is there a way to roll my own aggfunc within pivot_table that would compute a weighted average?
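For what it's worth, one possible aggfunc sketch (not necessarily the cleanest route): a callable aggfunc is handed each group's values as a Series that keeps its original index, so the matching weights can be looked up from the outer frame. The helper name wavg is just illustrative:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame([['A',10,1],['A',20,0],['B',10,1],['B',0,0]],
                  columns=['Group','wt','val'])

def wavg(s):
    # s keeps its original row index, so align it against df.wt
    return np.average(s, weights=df.loc[s.index, 'wt'])

res = pd.pivot_table(df, index='Group', values='val', aggfunc=wavg)
```

This relies on the closure over df, so it is not reusable across frames without rebinding.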

Upvotes: 1

Views: 312

Answers (2)

Quang Hoang

Reputation: 150745

Here's an approach with groupby:

(df.assign(total=df.wt*df.val)      # weighted values: wt * val
   .groupby('Group', as_index=False)
   .sum()                           # per-group sums of wt and total
   .assign(val=lambda x: x['total']/x['wt'])  # weighted average
   .drop('total', axis=1)
)

Output:

  Group  wt       val
0     A  30  0.333333
1     B  10  1.000000

Update: for all val-like columns:

# toy data
df = pd.DataFrame([['A',10,1,1],['A',20,0,1],['B',10,1,2],['B',0,0,1]],
                  columns=['Group','wt','val_a', 'val_b'])
# grouping sum
new_df = (df.filter(like='val')  # filter val columns
           .mul(df.wt, axis=0)   # multiply with weights
           .assign(wt=df.wt)     # attach weight
           .groupby(df.Group).sum()
)

# loop over columns and divide the weight sum
new_df.apply(lambda x: x/new_df['wt'] if x.name != 'wt' else x)

Output:

          val_a  val_b  wt
Group                     
A      0.333333    1.0  30
B      1.000000    2.0  10
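The final division can also be written without apply, by dividing only the val columns and re-attaching the weight sum afterwards (a sketch on the same toy data; result is just an illustrative name):

```python
import pandas as pd

df = pd.DataFrame([['A',10,1,1],['A',20,0,1],['B',10,1,2],['B',0,0,1]],
                  columns=['Group','wt','val_a','val_b'])

# grouped sums of weighted values plus the weight itself
new_df = (df.filter(like='val')
            .mul(df.wt, axis=0)
            .assign(wt=df.wt)
            .groupby(df.Group).sum())

# divide only the val columns by the weight sum, then re-attach wt
result = (new_df.filter(like='val')
                .div(new_df['wt'], axis=0)
                .assign(wt=new_df['wt']))
```

This avoids the per-column conditional inside apply at the cost of one extra pass over the frame.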

Upvotes: 1

sammywemmy

Reputation: 28689

This should work for multiple numerical columns:

Create a function that uses numpy average, with weights included.

Run a list comprehension on the groups in the groupby, and apply the function

Concatenate the output

df = pd.DataFrame([['A',10,1,2],['A',20,0,3],['B',10,1,2],['B',0,0,3]],columns=['Group','wt','val','vala'])




  Group  wt  val  vala
0     A  10    1     2
1     A  20    0     3
2     B  10    1     2
3     B   0    0     3

# create function
import numpy as np
import pandas as pd

def avg(group):
    out = pd.DataFrame()
    for col in group.columns.drop(['Group', 'wt']):
        A = group[col]
        B = group['wt']
        out['Group'] = group['Group'].unique()
        out['wt'] = B.sum()
        out[col] = np.average(A, weights=B)
    return out

# pipe function to each group in the list comprehension
output = [group.pipe(avg) for name, group in df.groupby('Group')]

# concatenate dataframes
pd.concat(output, ignore_index=True)


  Group  wt       val      vala
0     A  30  0.333333  2.666667
1     B  10  1.000000  2.000000
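The three steps above can also be collapsed into a single groupby.apply that returns one row per group (an equivalent sketch; wavg_all is a hypothetical helper name):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame([['A',10,1,2],['A',20,0,3],['B',10,1,2],['B',0,0,3]],
                  columns=['Group','wt','val','vala'])

def wavg_all(g):
    # one output row per group: summed weight plus weighted averages
    w = g['wt']
    out = {'wt': w.sum()}
    for col in g.columns:
        if col not in ('Group', 'wt'):
            out[col] = np.average(g[col], weights=w)
    return pd.Series(out)

# select columns explicitly so the grouping key stays out of each group
res = df.groupby('Group')[['wt', 'val', 'vala']].apply(wavg_all).reset_index()
```

Returning a Series per group lets pandas assemble the frame, so no explicit concat is needed.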

Upvotes: 0
