Reputation: 595
This is my current DataFrame, a CSV file sorted by login time and then reset_index:
Login Time User Port
0 2019-10-19 22:00:05 Jane 22
1 2019-10-19 22:00:05 Jane 22
2 2019-10-19 22:02:30 John 22
3 2019-10-19 22:02:44 John 22
4 2019-10-19 22:02:54 John 22
5 2019-10-19 22:03:59 Mary 22
6 2019-10-19 22:04:12 John 22
7 2019-10-19 22:04:17 John 22
8 2019-10-19 22:04:42 Kathy 22
9 2019-10-19 22:04:42 Kathy 22
What I want is a separate column counting how many times the user has logged in within the last 30 seconds, like this:
Login Time User Port LastLogin30Sec
0 2019-10-19 22:00:05 Jane 22 1
1 2019-10-19 22:00:05 Jane 22 2
2 2019-10-19 22:02:30 John 22 1
3 2019-10-19 22:02:44 John 22 2
4 2019-10-19 22:02:54 John 22 3
5 2019-10-19 22:03:59 Mary 22 1
6 2019-10-19 22:04:12 John 22 1
7 2019-10-19 22:04:17 John 22 2
8 2019-10-19 22:04:42 Kathy 22 1
9 2019-10-19 22:04:42 Kathy 22 2
So I decided to use rolling to specify the time period and count the rows. Rolling with a time-based window needs a DatetimeIndex:
df = df.set_index("Login Time")
df[df["User"]=="John"]["Port"].rolling("30s").count()
Login Time
2019-10-19 22:02:30 1.0
2019-10-19 22:02:44 2.0
2019-10-19 22:02:54 3.0
2019-10-19 22:04:12 1.0
2019-10-19 22:04:17 2.0
Name: Port, dtype: float64
Okay, that code works. But I would like to do this for every user, so I decided to leverage groupby... and this is where I hit a stumbling block.
Because rolling over a time period needs a datetime index, I have to preserve that index through the groupby. But the index is non-unique:
df["Count"] = df.groupby(["User"], as_index=False)['Port'].rolling("30s").count()
ValueError: cannot handle a non-unique multi-index!
So I figured I might as well not set the time index in the first place and set it after the groupby operation... but you can't call set_index on a DataFrameGroupBy:
df["Count"] = df.groupby(["User"], as_index=False).set_index("Login Time")["Port"].rolling("30s").count()
AttributeError: Cannot access callable attribute 'set_index' of 'DataFrameGroupBy' objects, try using the 'apply' method
And I don't see how apply would work for me.
Is anyone able to advise further? The whole problem seems to center on the fact that a .rolling time window needs a DatetimeIndex rather than just a datetime column.
Upvotes: 1
Views: 383
Reputation: 5451
You can use the apply function, in which you can run your rolling computation on each group:
import pandas as pd

df = pd.DataFrame(
    [[0, pd.Timestamp('2019-10-19 22:00:05'), 'Jane', '22'],
     [1, pd.Timestamp('2019-10-19 22:00:05'), 'Jane', '22'],
     [2, pd.Timestamp('2019-10-19 22:02:30'), 'John', '22'],
     [3, pd.Timestamp('2019-10-19 22:02:44'), 'John', '22'],
     [4, pd.Timestamp('2019-10-19 22:02:54'), 'John', '22'],
     [5, pd.Timestamp('2019-10-19 22:03:59'), 'Mary', '22'],
     [6, pd.Timestamp('2019-10-19 22:04:12'), 'John', '22'],
     [7, pd.Timestamp('2019-10-19 22:04:17'), 'John', '22'],
     [8, pd.Timestamp('2019-10-19 22:04:42'), 'Kathy', '22'],
     [9, pd.Timestamp('2019-10-19 22:04:42'), 'Kathy', '22']],
    columns=('id', 'Login-Time', 'User', 'Port'))
df2 = df.groupby("User").apply(lambda g: g.set_index("Login-Time")["Port"].rolling("30s").count()).reset_index()
print(df2)
Result
User Login-Time Port
0 Jane 2019-10-19 22:00:05 1.0
1 Jane 2019-10-19 22:00:05 2.0
2 John 2019-10-19 22:02:30 1.0
3 John 2019-10-19 22:02:44 2.0
4 John 2019-10-19 22:02:54 3.0
5 John 2019-10-19 22:04:12 1.0
6 John 2019-10-19 22:04:17 2.0
7 Kathy 2019-10-19 22:04:42 1.0
8 Kathy 2019-10-19 22:04:42 2.0
9 Mary 2019-10-19 22:03:59 1.0
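As a follow-up (my own addition, not part of the answer above): if you want the counts back on the original rows as a LastLogin30Sec column rather than in a reshaped frame, one sketch is to pass group_keys=False so each group keeps its original row labels, and to rebuild each group's rolling result on that index before assigning. This continues from the df built above and assumes it is already sorted by login time, as in the question.
df["LastLogin30Sec"] = (
    df.groupby("User", group_keys=False)       # keep each group's original row labels
      .apply(lambda g: pd.Series(
          g.set_index("Login-Time")["Port"].rolling("30s").count().values,
          index=g.index))                      # re-attach the original index so assignment aligns
      .astype(int)
)
print(df)
The assignment aligns on the index, so every row gets its own rolling count back in the original order. Note that rolling can also take the time column directly via its on= parameter (e.g. g.rolling("30s", on="Login-Time")), which avoids the set_index step.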
Upvotes: 2