delica

Reputation: 1707

Aggregations over specific columns of a large dataframe, with named output

I am looking for a way to aggregate over a large dataframe, possibly using groupby. Each group would be based on either pre-specified columns or regex, and the aggregation should produce a named output.

This produces a sample dataframe:

import pandas as pd
import itertools
import numpy as np

# three name parts: 'A'-'C', '1'-'9' and 'E'-'G'
col = "A,B,C".split(',')
col1 = "1,2,3,4,5,6,7,8,9".split(',')
col2 = "E,F,G".split(',')

# all 81 'X.n.Y' combinations as column names, one year of month-end dates as index
all_dims = [col, col1, col2]
all_keys = ['.'.join(i) for i in itertools.product(*all_dims)]
rng = pd.date_range(end=pd.Timestamp.today().date(), periods=12, freq='M')
df = pd.DataFrame(np.random.randint(0, 1000, size=(len(rng), len(all_keys))), columns=all_keys, index=rng)

The above produces a dataframe with one year's worth of monthly data and 81 columns with the following names:

['A.1.E', 'A.1.F', 'A.1.G', 'A.2.E', 'A.2.F', 'A.2.G', 'A.3.E', 'A.3.F',
       'A.3.G', 'A.4.E', 'A.4.F', 'A.4.G', 'A.5.E', 'A.5.F', 'A.5.G', 'A.6.E',
       'A.6.F', 'A.6.G', 'A.7.E', 'A.7.F', 'A.7.G', 'A.8.E', 'A.8.F', 'A.8.G',
       'A.9.E', 'A.9.F', 'A.9.G', 'B.1.E', 'B.1.F', 'B.1.G', 'B.2.E', 'B.2.F',
       'B.2.G', 'B.3.E', 'B.3.F', 'B.3.G', 'B.4.E', 'B.4.F', 'B.4.G', 'B.5.E',
       'B.5.F', 'B.5.G', 'B.6.E', 'B.6.F', 'B.6.G', 'B.7.E', 'B.7.F', 'B.7.G',
       'B.8.E', 'B.8.F', 'B.8.G', 'B.9.E', 'B.9.F', 'B.9.G', 'C.1.E', 'C.1.F',
       'C.1.G', 'C.2.E', 'C.2.F', 'C.2.G', 'C.3.E', 'C.3.F', 'C.3.G', 'C.4.E',
       'C.4.F', 'C.4.G', 'C.5.E', 'C.5.F', 'C.5.G', 'C.6.E', 'C.6.F', 'C.6.G',
       'C.7.E', 'C.7.F', 'C.7.G', 'C.8.E', 'C.8.F', 'C.8.G', 'C.9.E', 'C.9.F',
       'C.9.G']

What I would like now is to be able to aggregate over the dataframe, taking certain column combinations and producing named outputs. For example, one rule might be: take all 'A.*.E' columns (with any number in the middle), sum them, and produce a named output column called 'A.SUM.E'. Then do the same for 'A.*.F', 'A.*.G' and so on.

I have looked into pandas 0.25 named aggregation, which allows me to name my outputs, but I couldn't see how to simultaneously capture the right column combinations and produce the right output names.
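For reference, named aggregation takes keyword arguments of the form new_name=(column, function); here is a minimal sketch on a hypothetical long-format frame (long_df and val_sum are illustrative names, not from my data):

# named aggregation: output column 'val_sum' is the sum of 'val' per group
long_df = pd.DataFrame({'key': ['a', 'a', 'b'], 'val': [1, 2, 3]})
long_df.groupby('key').agg(val_sum=('val', 'sum'))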

If you need to reshape the dataframe to make a workable solution, that is fine as well.

Note, I am aware I could do something like this in a Python loop but I am looking for a pandas way to do it.

Upvotes: 0

Views: 99

Answers (1)

Stef

Reputation: 30589

Not a groupby solution, and it uses a loop, but I think it's nonetheless rather elegant: first get the unique (first part, last part) combinations of the column names using a set, then do the sums using filter:

# unique (first part, last part) pairs, e.g. ('A', 'E'), ('A', 'F'), ...
cols = sorted({(c.split('.')[0], c.split('.')[-1]) for c in df.columns})
for c0, c1 in cols:
    # sum all columns matching e.g. r'A\.\d+\.E' into a new column 'A.SUM.E'
    df[f'{c0}.SUM.{c1}'] = df.filter(regex=rf'{c0}\.\d+\.{c1}').sum(axis=1)
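Note that this appends the nine new SUM columns to df in place, after the 81 original columns; they show up at the right of the result below.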

Result:

            A.1.E  A.1.F  A.1.G  A.2.E  ...  B.SUM.G  C.SUM.E  C.SUM.F  C.SUM.G
2018-08-31    978    746    408    109  ...     4061     5413     4102     4908
2018-09-30    923    649    488    447  ...     5585     3634     3857     4228
2018-10-31    911    359    897    425  ...     5039     2961     5246     4126
2018-11-30     77    479    536    509  ...     4634     4325     2975     4249
2018-12-31    608    995    114    603  ...     5377     5277     4509     3499
2019-01-31    138    612    363    218  ...     4514     5088     4599     4835
2019-02-28    994    148    933    990  ...     3907     4310     3906     3552
2019-03-31    950    931    209    915  ...     4354     5877     4677     5557
2019-04-30    255    168    357    800  ...     5267     5200     3689     5001
2019-05-31    593    594    824    986  ...     4221     2108     4636     3606
2019-06-30    975    396    919    242  ...     3841     4787     4556     3141
2019-07-31    350    312    104    113  ...     4071     5073     4829     3717



If you want to have the result in a new DataFrame, just create an empty one and add the columns to it:

result = pd.DataFrame()
for c0, c1 in cols:
    result[f'{c0}.SUM.{c1}'] = df.filter(regex=rf'{c0}\.\d+\.{c1}').sum(axis=1)

Update: using a simple groupby (which is even simpler in this particular case):

def grouper(col):
    # map e.g. 'A.3.E' to the output name 'A.SUM.E'
    c = col.split('.')
    return f'{c[0]}.SUM.{c[-1]}'

df.groupby(grouper, axis=1).sum()
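Note: in pandas 2.1 and later, grouping with axis=1 is deprecated. A sketch of the equivalent on recent versions, reusing the same grouper function, is to transpose, group the row labels, and transpose back:

# equivalent on pandas >= 2.1, where groupby(..., axis=1) is deprecated
df.T.groupby(grouper).sum().T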

Upvotes: 2
