Reputation: 1894
I have the following data:
,dateTime,magnitude,occurrence,dateTime_s
1,2017-11-20 08:00:09.052260,12861,1,2017-11-20 08:00:09.000000
2,2017-11-20 08:00:09.052270,12868.12,1,2017-11-20 08:00:09.000000
3,2017-11-20 08:00:09.052282,12868.12,1,2017-11-20 08:00:09.000000
4,2017-11-20 08:00:09.052291,12867.5,2,2017-11-20 08:00:09.000000
5,2017-11-20 08:00:09.052315,12867.5,4,2017-11-20 08:00:09.000000
6,2017-11-20 08:00:09.052315,12867,1,2017-11-20 08:00:09.000000
7,2017-11-20 08:00:09.052315,12865.5,1,2017-11-20 08:00:09.000000
8,2017-11-20 08:00:09.052315,12865.89,1,2017-11-20 08:00:09.000000
9,2017-11-20 08:00:12.064744,12867.5,1,2017-11-20 08:00:12.000000
10,2017-11-20 08:00:12.131555,12868.5,2,2017-11-20 08:00:12.000000
11,2017-11-20 08:00:12.333511,12868.5,4,2017-11-20 08:00:12.000000
12,2017-11-20 08:00:12.333511,12869.95,2,2017-11-20 08:00:12.000000
13,2017-11-20 08:00:12.341516,12869.5,1,2017-11-20 08:00:12.000000
14,2017-11-20 08:00:12.343538,12868.5,1,2017-11-20 08:00:12.000000
15,2017-11-20 08:00:12.343538,12868.17,5,2017-11-20 08:00:12.000000
16,2017-11-20 08:00:12.343538,12867.5,2,2017-11-20 08:00:12.000000
17,2017-11-20 08:00:14.148704,12882.5,1,2017-11-20 08:00:14.000000
18,2017-11-20 08:00:14.148748,12882.5,1,2017-11-20 08:00:14.000000
19,2017-11-20 08:00:14.218977,12883.66,1,2017-11-20 08:00:14.000000
20,2017-11-20 08:00:14.218977,12883.5,1,2017-11-20 08:00:14.000000
21,2017-11-20 08:00:14.385283,12882.09,1,2017-11-20 08:00:14.000000
22,2017-11-20 08:00:14.388518,12881.5,1,2017-11-20 08:00:14.000000
23,2017-11-20 08:00:14.577002,12882.5,1,2017-11-20 08:00:14.000000
And I am using the following code to aggregate it by time (as it's in milliseconds and I need it by seconds):
import pandas as pd
import numpy as np

df = pd.read_csv('C:/Users/Data/test.csv')
print(df.head(30))

groups = df.groupby('dateTime_s')
df_grouped = groups.agg({
    'magnitude': np.mean,
    'occurrence': np.sum,
})
print(df_grouped.head())
The result is good:
                               magnitude  occurrence
dateTime_s
2017-11-20 08:00:09.000000  12866.328750          12
2017-11-20 08:00:12.000000  12868.515000          18
2017-11-20 08:00:14.000000  12882.607143           7
But for my research I also need the most frequent magnitude and its occurrence. How can I group (inside the current groupby) to find the magnitude with the highest frequency, and display both that magnitude and its frequency?
I am looking for a result like this:
                              groupby                   magnitude
dateTime_s                    magnitude  occurrence     max      sum
2017-11-20 08:00:09.000000    12866.32875          12   12867.5    6
2017-11-20 08:00:12.000000    12868.515            18   12868.5    7
2017-11-20 08:00:14.000000    12882.607143          7   12882.5    3
Upvotes: 1
Views: 298
Reputation: 862641
I believe you need a custom function for the sum of occurrence values by the top magnitude values:
groups = df.groupby('dateTime_s')
df_grouped = groups.agg({
    'magnitude': np.mean,
    'occurrence': np.sum,
})
#print (df_grouped)

def f(x):
    # most frequent magnitude in this group
    a = x['magnitude'].value_counts().index[0]
    # sum of occurrence for the rows with that magnitude
    b = x.loc[x['magnitude'] == a, 'occurrence'].sum()
    return pd.Series([a, b], ['max magn', 'freq oc'])

df_grouped1 = groups.apply(f)
#print (df_grouped1)

df = pd.concat([df_grouped, df_grouped1], axis=1)
print (df)
                        magnitude  occurrence  max magn  freq oc
dateTime_s
2017-11-20 08:00:09  12866.328750          12   12867.5      6.0
2017-11-20 08:00:12  12868.515000          18   12868.5      7.0
2017-11-20 08:00:14  12882.607143           7   12882.5      3.0
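If you want to see what the helper picks for a single second, you can inspect one group directly; a minimal check, assuming the group key is the same string that appears in the dateTime_s column:

# Peek at one group: value_counts() lists magnitudes by descending frequency,
# so .index[0] is the most frequent one (key assumed to match the CSV string).
grp = groups.get_group('2017-11-20 08:00:09.000000')
print(grp['magnitude'].value_counts())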
Or use only a custom function:
groups = df.groupby('dateTime_s')

def f(x):
    # most frequent magnitude in this group
    a = x['magnitude'].value_counts().index[0]
    # sum of occurrence for the rows with that magnitude
    b = x.loc[x['magnitude'] == a, 'occurrence'].sum()
    # overall mean magnitude and total occurrence of the group
    c = x['magnitude'].mean()
    d = x['occurrence'].sum()
    return pd.Series([a, b, c, d], ['max magn', 'freq oc', 'mean', 'sum'])

df_grouped1 = groups.apply(f)
print (df_grouped1)
                     max magn  freq oc          mean   sum
dateTime_s
2017-11-20 08:00:09   12867.5      6.0  12866.328750  12.0
2017-11-20 08:00:12   12868.5      7.0  12868.515000  18.0
2017-11-20 08:00:14   12882.5      3.0  12882.607143   7.0
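If you are on pandas 0.25 or newer, the mean/sum part can also be written with named aggregation instead of passing NumPy functions; this is only a sketch of the same idea, assuming df is still the original DataFrame:

# Sketch: named aggregation (pandas >= 0.25 assumed) for the plain stats,
# plus the same per-group helper for the most frequent magnitude.
def top_freq(x):
    top = x['magnitude'].value_counts().index[0]
    return pd.Series({'max magn': top,
                      'freq oc': x.loc[x['magnitude'] == top, 'occurrence'].sum()})

stats = df.groupby('dateTime_s').agg(magnitude=('magnitude', 'mean'),
                                     occurrence=('occurrence', 'sum'))
out = pd.concat([stats, df.groupby('dateTime_s').apply(top_freq)], axis=1)
print(out)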
Upvotes: 2