Reputation: 666
I have trouble using a window function instead of groupBy to aggregate per user (user ids 110 and 222 in my case). I want to:
1- count rows for each p_uuid
2- create new columns with min and max timestamp for each p_uuid
df = spark.createDataFrame([(1, 110, 'aaa', 'walk', 'work', '2019-09-28 13:40:19-04:00'),
(2, 110, 'aaa', 'walk', 'work', '2019-09-28 13:40:19-04:01'),
(3, 110, 'aaa', 'walk', 'work', '2019-09-28 13:40:19-04:02'),
(4, 110, 'aaa', 'metro', 'work', '2019-09-28 13:41:19-04:00'),
(5, 110, 'aaa', 'metro', 'work', '2019-09-28 13:41:19-04:01'),
(6, 110, 'aaa', 'walk', 'work', '2019-09-28 13:42:19-04:00'),
(7, 110, 'aaa', 'walk', 'work', '2019-09-28 13:42:19-04:01'),
(8, 110, 'bbb', 'bike', 'home', '2019-09-17 14:40:19-04:00'),
(9, 110, 'bbb', 'bus', 'home', '2019-09-17 14:41:19-04:00'),
(10, 110, 'bbb', 'walk', 'home', '2019-09-17 14:43:19-04:00'),
(16, 110, 'ooo', None, None, '2019-08-29 16:01:19-04:00'),
(17, 110, 'ooo', None, None, '2019-08-29 16:02:19-04:00'),
(18, 110, 'ooo', None, None, '2019-08-29 16:02:19-04:00'),
(19, 222, 'www', 'car', 'work', '2019-09-28 08:00:19-04:00'),
(20, 222, 'www', 'metro', 'work', '2019-09-28 08:01:19-04:00'),
(21, 222, 'www', 'walk', 'work', '2019-09-28 08:02:19-04:00'),
(22, 222, 'xxx', 'walk', 'friend', '2019-09-17 08:40:19-04:00'),
(23, 222, 'xxx', 'bike', 'friend', '2019-09-17 08:42:19-04:00'),
(24, 222, 'xxx', 'bus', 'friend', '2019-09-17 08:43:19-04:00'),
(30, 222, 'ooo', None, None, '2019-08-29 10:00:19-04:00'),
(31, 222, 'ooo', None, None, '2019-08-29 10:01:19-04:00'),
(32, 222, 'ooo', None, None, '2019-08-29 10:02:19-04:00')],
['idx', 'u_uuid', 'p_uuid', 'mode', 'place', 'timestamp']
)
df.show(30, False)
I used:
from pyspark.sql import functions as F
from pyspark.sql.window import Window

win = Window.partitionBy("u_uuid", "p_uuid").orderBy("timestamp")
df = df.withColumn("count_", F.count("p_uuid").over(win))
df = df.withColumn("max_timestamp", F.max("timestamp").over(win))
df = df.withColumn("min_timestamp", F.min("timestamp").over(win))
It doesn't seem to work: max_timestamp, for example, is not the maximum for each p_uuid.
Note: ignore the trip_id, subtrip_id, and track_id columns.
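For reference, this is a minimal groupBy sketch of the result I'm after (the aggregate names count_, min_timestamp, and max_timestamp are my own, and the join back onto df is exactly what I hoped the window would avoid):

from pyspark.sql import functions as F

# One row per (u_uuid, p_uuid), then joined back to get per-row columns.
agg = (df.groupBy("u_uuid", "p_uuid")
         .agg(F.count("p_uuid").alias("count_"),
              F.min("timestamp").alias("min_timestamp"),
              F.max("timestamp").alias("max_timestamp")))
df.join(agg, on=["u_uuid", "p_uuid"], how="left").show(30, False)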
Upvotes: 1
Views: 119
Reputation: 32660
You have to extend the window frame to the entire partition using rowsBetween:
win = Window.partitionBy("u_uuid", "p_uuid").orderBy("timestamp").rowsBetween(Window.unboundedPreceding, Window.unboundedFollowing)
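With that frame, all three aggregates from the question can be chained onto the same window (a sketch reusing the question's column names):

from pyspark.sql import functions as F
from pyspark.sql.window import Window

win = (Window.partitionBy("u_uuid", "p_uuid")
             .orderBy("timestamp")
             .rowsBetween(Window.unboundedPreceding, Window.unboundedFollowing))

# Each withColumn returns a new DataFrame, so chain (or reassign) the calls.
result = (df.withColumn("count_", F.count("p_uuid").over(win))
            .withColumn("min_timestamp", F.min("timestamp").over(win))
            .withColumn("max_timestamp", F.max("timestamp").over(win)))
result.show(30, False)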
Upvotes: 1
Reputation: 31490
You need to use unboundedPreceding, unboundedFollowing in the window frame. When a window spec has an orderBy clause, the default frame is unboundedPreceding, currentRow, so aggregates like max are computed as running values up to the current row rather than over the whole partition. Add .rowsBetween to your window spec and run again:
win = Window.partitionBy("u_uuid", "p_uuid").orderBy("timestamp").rowsBetween(Window.unboundedPreceding, Window.unboundedFollowing)
Example:
df.withColumn("max_timestamp", max("timestamp").over(win)).show(10,False)
+---+------+------+-----+------+-------------------------+-------------------------+
|idx|u_uuid|p_uuid|mode |place |timestamp |max_timestamp |
+---+------+------+-----+------+-------------------------+-------------------------+
|8 |110 |bbb |bike |home |2019-09-17 14:40:19-04:00|2019-09-17 14:43:19-04:00|
|9 |110 |bbb |bus |home |2019-09-17 14:41:19-04:00|2019-09-17 14:43:19-04:00|
|10 |110 |bbb |walk |home |2019-09-17 14:43:19-04:00|2019-09-17 14:43:19-04:00|
|16 |110 |ooo |null |null |2019-08-29 16:01:19-04:00|2019-08-29 16:02:19-04:00|
|17 |110 |ooo |null |null |2019-08-29 16:02:19-04:00|2019-08-29 16:02:19-04:00|
|18 |110 |ooo |null |null |2019-08-29 16:02:19-04:00|2019-08-29 16:02:19-04:00|
+---+------+------+-----+------+-------------------------+-------------------------+
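To see what the frame changes, compare the running max from the default frame against the full-partition max (a quick sketch; the running_max and partition_max column names are illustrative):

from pyspark.sql import functions as F
from pyspark.sql.window import Window

# With orderBy and no explicit frame: unboundedPreceding -> currentRow,
# so F.max returns a running maximum up to the current row.
win_default = Window.partitionBy("u_uuid", "p_uuid").orderBy("timestamp")
# Explicit full frame: unboundedPreceding -> unboundedFollowing.
win_full = win_default.rowsBetween(Window.unboundedPreceding,
                                   Window.unboundedFollowing)

(df.withColumn("running_max", F.max("timestamp").over(win_default))
   .withColumn("partition_max", F.max("timestamp").over(win_full))
   .show(10, False))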
Upvotes: 1