gabboshow

Reputation: 5569

pandas groupby events across different days

import pandas as pd

df = pd.DataFrame(data=[[1,1,10],[1,2,50],[1,3,20],[1,4,24],
                        [2,1,20],[2,2,10],[2,3,20],[2,4,34],
                        [3,1,10],[3,2,50],[3,3,20],[3,4,24],
                        [3,5,24],[4,1,24]],
                  columns=['day','hour','event'])

df
Out[4]: 
    day  hour  event
0     1     1     10
1     1     2     50
2     1     3     20 <- yes
3     1     4     24 <- yes
4     2     1     20 <- yes
5     2     2     10
6     2     3     20 <- yes
7     2     4     34 <- yes
8     3     1     10 <- yes
9     3     2     50
10    3     3     20 <- yes
11    3     4     24 <- yes
12    3     5     24 <- yes (here we have also an hour more)
13    4     1     24 <- yes

Now I would like to sum the events from hour=3 of each day through hour=1 of the following day.

The expected result should be

0 64
1 64
2 92
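
(That is, 20+24+20 = 64 for day 1 hours 3-4 plus day 2 hour 1, 20+34+10 = 64 for the next window, and 20+24+24+24 = 92 for the last window, which includes the extra hour 5 and day 4 hour 1.)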

Upvotes: 3

Views: 445

Answers (3)

jezrael

Reputation: 862851

#convert day/hour columns to datetimes and subtract 2 hours, so hour 1 of the
#next day falls on the same date as hours 3 and 4 of the current day:
a = pd.to_datetime(df['day'].astype(str) + ':' + df['hour'].astype(str), format='%d:%H') - pd.Timedelta(2, unit='h')
#keep only shifted hours between 1 and 23 (i.e. original hours 3, 4, ..., 23 and 1)
hours = a.dt.hour.between(1,23)
#label consecutive runs of the mask to create group ids
df['a'] = hours.ne(hours.shift()).cumsum()
#keep only the expected hours
df = df[hours]
#aggregate per group
df = df.groupby('a')['event'].sum().reset_index(drop=True)
print (df)
0    10
1    64
2    64
3    92
Name: event, dtype: int64
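
The leading 10 is the incomplete window formed by day 1, hour 1, which has no preceding day's hour 3 (the next solution removes it via the year check). If only the three complete windows are wanted, a minimal post-hoc fix for this data is to drop the first value of the result:

#drop the incomplete leading group (day 1, hour 1 only)
df = df.iloc[1:].reset_index(drop=True)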

Another similar solution:

#create datetimeindex
df.index = pd.to_datetime(df['day'].astype(str)+':'+df['hour'].astype(str), format='%d:%H')
#shift the index back by 2 hours
df = df.shift(-2, freq='h')
#drop the hour 0 rows and the first unnecessary event (pushed back into 1899)
df = df[(df.index.hour != 0) & (df.index.year != 1899)]
#aggregate
df = df.groupby(df.index.day)['event'].sum().reset_index(drop=True)
print (df)
0    64
1    64
2    92
Name: event, dtype: int64
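
The year check works because format='%d:%H' parses into the default year 1900 and month 1, so shifting day 1, hour 1 back by two hours lands it on 1899-12-31 23:00, while every other row stays in 1900.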

Another solution:

import numpy as np

#keep only rows from the first hour==3 onwards and drop the hour==2 rows
df = df[(df['hour'].eq(3).cumsum() > 0) & (df['hour'] != 2)]
#assign rows with hour < 3 to the previous day's window, then aggregate
df = df['event'].groupby(np.where(df['hour'] < 3, df['day'] - 1, df['day'])).sum()
print (df)
1    64
2    64
3    92
Name: event, dtype: int64
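
The index here (1, 2, 3) is the day on which each window starts; chain .reset_index(drop=True) as in the other solutions if you want the 0-based index from the expected output.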

Upvotes: 1

jpp

Reputation: 164703

One way is to define a grouping column via pd.DataFrame.apply with a custom function.

Then group by this new column.

df['grouping'] = df.apply(lambda x: x['day']-2 if x['hour'] < 3 else x['day']-1, axis=1)

res = df.loc[(df['hour'] != 2) & (df['grouping'] >= 0)]\
        .groupby('grouping')['event'].sum()\
        .reset_index(drop=True)

Result

0    64
1    64
2    92
Name: event, dtype: int64

Upvotes: 0

fuglede

Reputation: 18201

One option would be to remove all entries for which hour is 2, then combine the remaining events into groups of 3 and sum those:

import numpy as np
#drop hour 2 and the very first row, then sum each block of 3 events
v = df[df.hour != 2][1:].event
np.add.reduceat(v, range(0, len(v), 3))
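
Note that the fixed stride of 3 assumes every window holds exactly three rows; day 3 in the sample has the extra hour 5, so for this data the group boundaries would have to be computed instead, e.g. from the positions where hour equals 3 (a sketch along the same lines, not part of the original answer):

#start a new group at every hour == 3 instead of every 3 rows,
#so the extra hour on day 3 stays in its window
d = df[df.hour != 2][1:]
starts = np.flatnonzero(d.hour.values == 3)
np.add.reduceat(d.event.values, starts)  # array([64, 64, 92])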

Upvotes: 0
