I have a dataframe with the columns user_id, product_id, created_at and removed_at. I want to add a boolean column "is_switch" that is True if, for a given user, the created_at timestamp is within a timedelta (let's say 1 second) of the removed_at of any other row in that user's group. How can I do this without iterating over every row, or is iterating the appropriate way to do it?
I am trying to write a custom function to use with .apply that will run on each user group, but I'm not sure how to compare each row with all the other rows in one shot.
# Code to create a sample DataFrame.
# The paired timestamps below are within a second of each other.
import datetime

import pandas as pd

a = datetime.datetime.now()
a2 = a - datetime.timedelta(seconds=1)
b = datetime.datetime.now() - datetime.timedelta(days=4)
b2 = b - datetime.timedelta(seconds=1)
c = datetime.datetime.now() - datetime.timedelta(days=40)
c2 = c - datetime.timedelta(seconds=1)
d = datetime.datetime.now() - datetime.timedelta(days=30)
d2 = d - datetime.timedelta(seconds=1)
e = datetime.datetime.now() - datetime.timedelta(days=60)
e2 = e - datetime.timedelta(seconds=1)
f = datetime.datetime.now() - datetime.timedelta(days=100)
g = datetime.datetime.now() - datetime.timedelta(days=99)

# Use pd.NaT (not the string 'NaT') so removed_at stays a datetime column.
df = pd.DataFrame(
    {"user_id":    [0, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4],
     "product_id": [100, 101, 102, 101, 102, 104, 105, 106, 107, 105, 106, 107],
     "created_at": [a, a, b, c, d, c, f, f, e2, f, f, d],
     "removed_at": [pd.NaT, b2, pd.NaT, d2, pd.NaT, pd.NaT, e, g, pd.NaT, e2, g, b]},
    index=[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11])
print(df)
yields this:
user_id product_id created_at removed_at
0 0 100 2019-08-04 09:15:05.200981 NaT
1 1 101 2019-08-04 09:15:05.200981 2019-07-31 09:15:04.201063
2 1 102 2019-07-31 09:15:05.201063 NaT
3 2 101 2019-06-25 09:15:05.201121 2019-07-05 09:15:04.201179
4 2 102 2019-07-05 09:15:05.201179 NaT
5 2 104 2019-06-25 09:15:05.201121 NaT
6 3 105 2019-04-26 09:15:05.201290 2019-06-05 09:15:05.201235
7 3 106 2019-04-26 09:15:05.201290 2019-04-27 09:15:05.201324
8 3 107 2019-06-05 09:15:04.201235 NaT
9 4 105 2019-04-26 09:15:05.201290 2019-06-05 09:15:04.201235
10 4 106 2019-04-26 09:15:05.201290 2019-04-27 09:15:05.201324
11 4 107 2019-07-05 09:15:05.201179 2019-07-31 09:15:05.201063
So I currently have something like this:
group_by_user = df.groupby('user_id')

def calculate_is_switch(grp):
    # What goes here? How can I compare rows without iterating over each one?
    pass

# group_by_user.apply(calculate_is_switch)
I would like to add the 'is_switch' column so the output is this:
user_id product_id created_at removed_at \
0 0 100 2019-08-04 09:15:05.200981 NaT
1 1 101 2019-08-04 09:15:05.200981 2019-07-31 09:15:04.201063
2 1 102 2019-07-31 09:15:05.201063 NaT
3 2 101 2019-06-25 09:15:05.201121 2019-07-05 09:15:04.201179
4 2 102 2019-07-05 09:15:05.201179 NaT
5 2 104 2019-06-25 09:15:05.201121 NaT
6 3 105 2019-04-26 09:15:05.201290 2019-06-05 09:15:05.201235
7 3 106 2019-04-26 09:15:05.201290 2019-04-27 09:15:05.201324
8 3 107 2019-06-05 09:15:04.201235 NaT
9 4 105 2019-04-26 09:15:05.201290 2019-06-05 09:15:04.201235
10 4 106 2019-04-26 09:15:05.201290 2019-04-27 09:15:05.201324
11 4 107 2019-07-05 09:15:05.201179 2019-07-31 09:15:05.201063
is_switch
0 False
1 False
2 True
3 False
4 True
5 False
6 False
7 False
8 True
9 False
10 False
11 False
Upvotes: 3
Use GroupBy.apply with a custom function: first replace missing removed_at values with a default datetime such as Timestamp.min, then compare the columns per group with broadcasting — all created_at values against all removed_at values — take the absolute differences, test them against 1 second, and reduce along rows with any so a row is True when at least one pair matches:
import numpy as np

val = pd.Timedelta(1, unit='s')

def f(x):
    # Broadcast: pairwise created_at - removed_at differences within the group.
    y = x['created_at'].values - x['removed_at'].values[:, None]
    # Compare absolute differences (as nanoseconds) against the 1-second tolerance.
    y = np.any(np.abs(y).astype(np.int64) <= val.value, axis=0)
    return pd.Series(y, index=x.index)

# fillna is needed: casting NaT to int64 would otherwise produce spurious matches.
df['is_switch'] = (df.assign(removed_at=df['removed_at'].fillna(pd.Timestamp.min))
                     .groupby('user_id')
                     .apply(f)
                     .reset_index(level=0, drop=True))
print(df)
user_id product_id created_at removed_at \
0 0 100 2019-08-04 16:22:39.309093 NaT
1 1 101 2019-08-04 16:22:39.309093 2019-07-31 16:22:38.309093
2 1 102 2019-07-31 16:22:39.309093 NaT
3 2 101 2019-06-25 16:22:39.309093 2019-07-05 16:22:38.309093
4 2 102 2019-07-05 16:22:39.309093 NaT
5 2 104 2019-06-25 16:22:39.309093 NaT
6 3 105 2019-04-26 16:22:39.309093 2019-06-05 16:22:39.309093
7 3 106 2019-04-26 16:22:39.309093 2019-04-27 16:22:39.309093
8 3 107 2019-06-05 16:22:38.309093 NaT
9 4 105 2019-04-26 16:22:39.309093 2019-06-05 16:22:38.309093
10 4 106 2019-04-26 16:22:39.309093 2019-04-27 16:22:39.309093
11 4 107 2019-07-05 16:22:39.309093 2019-07-31 16:22:39.309093
is_switch
0 False
1 False
2 True
3 False
4 True
5 False
6 False
7 False
8 True
9 False
10 False
11 False
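The same broadcasted comparison can also be kept in timedelta space, which sidesteps both the fillna step and the int64 cast, since comparisons against NaT are simply False. A minimal sketch on a small made-up frame (`toy` and `is_switch` are illustrative names, not part of the answer above):

```python
import numpy as np
import pandas as pd

t0 = pd.Timestamp('2019-08-04 09:15:05')
toy = pd.DataFrame({
    'user_id':    [1, 1, 2],
    'created_at': [t0 + pd.Timedelta(days=5), t0, t0],
    'removed_at': [t0 - pd.Timedelta(seconds=1), pd.NaT, pd.NaT],
})

def is_switch(grp):
    # Pairwise differences: rows are removed_at, columns are created_at.
    diff = grp['created_at'].values - grp['removed_at'].values[:, None]
    # NaT propagates through abs(), and NaT <= anything is False,
    # so missing removed_at values drop out without fillna.
    hit = np.abs(diff) <= np.timedelta64(1, 's')
    return pd.Series(hit.any(axis=0), index=grp.index)

toy['is_switch'] = toy.groupby('user_id', group_keys=False).apply(is_switch)
print(toy['is_switch'].tolist())   # [False, True, False]
```

Only the second row's created_at falls within a second of another row's removed_at for the same user, so only it is flagged.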
Upvotes: 3
A one-liner would be:
print(~df['created_at'].sub(df.groupby('user_id').transform('first')['created_at']).dt.days.between(-1, 1))
Output:
0 False
1 False
2 True
3 False
4 True
5 False
Name: created_at, dtype: bool
Upvotes: 0
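For comparison, the pairwise check can also be written without GroupBy.apply by self-merging each user's rows, at the cost of materializing every within-user pair. A hedged sketch on made-up data (`toy` and the column names with the `_other` suffix are illustrative, not from either answer):

```python
import pandas as pd

t0 = pd.Timestamp('2019-08-04 09:15:05')
toy = pd.DataFrame({
    'user_id':    [7, 7, 8],
    'created_at': [t0, t0 - pd.Timedelta(days=3), t0],
    'removed_at': [pd.NaT, t0, pd.NaT],
})

# Pair every row with every row of the same user (including itself).
pairs = toy.reset_index().merge(toy, on='user_id', suffixes=('', '_other'))
# A row "switches" if its created_at is within 1s of any removed_at in its group;
# differences involving NaT come out as NaT, which compares as False.
near = (pairs['created_at'] - pairs['removed_at_other']).abs() <= pd.Timedelta(seconds=1)
toy['is_switch'] = near.groupby(pairs['index']).any()
print(toy['is_switch'].tolist())   # [True, False, False]
```

This scales quadratically per user, so the broadcasting approach above is preferable for users with many rows.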