Jeroen

Reputation: 841

Pyspark group dataframe within time interval

I have a PySpark dataframe which is sorted ascending on 'ship' and 'timestamp':

+----------------------+------+
|        timestamp     | ship |
+----------------------+------+
| 2018-08-01 06:01:00  |    1 |
| 2018-08-01 06:01:30  |    1 |
| 2018-08-01 09:00:00  |    1 |
| 2018-08-01 09:00:00  |    2 |
| 2018-08-01 10:15:43  |    2 |
| 2018-08-01 11:00:01  |    3 |
| 2018-08-01 06:00:13  |    4 |
| 2018-08-01 13:00:00  |    4 |
| 2018-08-13 14:00:00  |    5 |
| 2018-08-13 14:15:03  |    5 |
| 2018-08-13 14:45:08  |    5 |
| 2018-08-13 14:50:00  |    5 |
+----------------------+------+

I want to add a new column called 'trip' to the dataframe. A trip is a set of records for the same ship that all fall within 2 hours of the first record of that trip. Whenever the ship number changes, or more than 2 hours have passed for the same ship, a new trip number should be assigned in the 'trip' column.

Desired output looks like:

+----------------------+------+-------+
|        timestamp     | ship | trip  |
+----------------------+------+-------+
| 2018-08-01 06:01:00  |    1 |    1  | # start new ship number
| 2018-08-01 06:01:30  |    1 |    1  | # still within 2 hours of same ship number
| 2018-08-01 09:00:00  |    1 |    2  | # more than 2 hours of same ship number = new trip
| 2018-08-01 09:00:00  |    2 |    3  | # new ship number = new trip
| 2018-08-01 10:15:43  |    2 |    3  | # still within 2 hours of same ship number
| 2018-08-01 11:00:01  |    3 |    4  | # new ship number = new trip
| 2018-08-01 06:00:13  |    4 |    5  | # new ship number = new trip
| 2018-08-01 13:00:00  |    4 |    6  | # more than 2 hours of same ship number = new trip
| 2018-08-13 14:00:00  |    5 |    7  | # new ship number = new trip
| 2018-08-13 14:15:03  |    5 |    7  | # still within 2 hours of same ship number
| 2018-08-13 14:45:08  |    5 |    7  | # still within 2 hours of same ship number
| 2018-08-13 14:50:00  |    5 |    7  | # still within 2 hours of same ship number
+----------------------+------+-------+

In Pandas it would be done like this:

dt_trip = 2  # time duration of a trip per ship (in hours)

# seconds elapsed since each ship's first record, bucketed into 2-hour trips
total_time = df['timestamp'] - df.groupby('ship')['timestamp'].transform('min')
trips = total_time.dt.total_seconds().fillna(0)//(dt_trip*3600)
df['trip'] = df.groupby(['ship', trips]).ngroup()+1
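
For reference, a minimal construction of the sample data that makes the snippet above runnable; the literal values are copied from the table, everything else is an assumption:

import pandas as pd

# Sample data from the table above; parsing to datetime is assumed,
# since .dt.total_seconds() requires a timedelta built from datetimes.
df = pd.DataFrame({
    'timestamp': pd.to_datetime([
        '2018-08-01 06:01:00', '2018-08-01 06:01:30', '2018-08-01 09:00:00',
        '2018-08-01 09:00:00', '2018-08-01 10:15:43', '2018-08-01 11:00:01',
        '2018-08-01 06:00:13', '2018-08-01 13:00:00', '2018-08-13 14:00:00',
        '2018-08-13 14:15:03', '2018-08-13 14:45:08', '2018-08-13 14:50:00',
    ]),
    'ship': [1, 1, 1, 2, 2, 3, 4, 4, 5, 5, 5, 5],
})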

How would this be done in PySpark?

Upvotes: 1

Views: 739

Answers (1)

murtihash

Reputation: 8410

Try this, using window functions: row_number(), collect_list(), and an incremental sum over the flags built from those conditions.

from pyspark.sql import functions as F
from pyspark.sql.window import Window

# w1: rows of the same ship within the preceding 7199 seconds
# (just under 2 hours) of the current row
w1 = Window.partitionBy("ship").orderBy(F.unix_timestamp("timestamp"))\
           .rangeBetween(-7199, Window.currentRow)
# w2: per-ship ordering, used to flag each ship's first row
w2 = Window.partitionBy("ship").orderBy("timestamp")
# w3: global ordering for the running sum that numbers the trips
w3 = Window.orderBy("ship", "timestamp")

# Flag a row with 1 if it starts a new trip (it is the first row of its
# ship, or no earlier row of that ship falls inside w1); the running sum
# of those flags over w3 is the trip number.
df.withColumn("trip", F.sum(F.when(F.row_number().over(w2) == 1, F.lit(1))
                       .when(F.size(F.collect_list("ship").over(w1)) == 1, F.lit(1))
                       .otherwise(F.lit(0))).over(w3))\
  .orderBy("ship", "timestamp").show()

#+-------------------+----+----+
#|          timestamp|ship|trip|
#+-------------------+----+----+
#|2018-08-01 06:01:00|   1|   1|
#|2018-08-01 06:01:30|   1|   1|
#|2018-08-01 09:00:00|   1|   2|
#|2018-08-01 09:00:00|   2|   3|
#|2018-08-01 10:15:43|   2|   3|
#|2018-08-01 11:00:01|   3|   4|
#|2018-08-01 06:00:13|   4|   5|
#|2018-08-01 13:00:00|   4|   6|
#|2018-08-13 14:00:00|   5|   7|
#|2018-08-13 14:15:03|   5|   7|
#|2018-08-13 14:45:08|   5|   7|
#|2018-08-13 14:50:00|   5|   7|
#+-------------------+----+----+
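
For anyone reproducing this locally, here is a minimal sketch that builds the sample frame the snippet above expects; the SparkSession setup and the to_timestamp cast are assumptions, not part of the original answer:

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# Sample data from the question; timestamps arrive as strings and are
# cast so unix_timestamp() and the range window behave as intended.
rows = [
    ("2018-08-01 06:01:00", 1), ("2018-08-01 06:01:30", 1),
    ("2018-08-01 09:00:00", 1), ("2018-08-01 09:00:00", 2),
    ("2018-08-01 10:15:43", 2), ("2018-08-01 11:00:01", 3),
    ("2018-08-01 06:00:13", 4), ("2018-08-01 13:00:00", 4),
    ("2018-08-13 14:00:00", 5), ("2018-08-13 14:15:03", 5),
    ("2018-08-13 14:45:08", 5), ("2018-08-13 14:50:00", 5),
]
df = spark.createDataFrame(rows, ["timestamp", "ship"])\
          .withColumn("timestamp", F.to_timestamp("timestamp"))

One caveat: this answer starts a new trip whenever no earlier record of the same ship falls in the preceding 7199 seconds (hence the -7199 bound, one second short of 2 hours), whereas the pandas snippet in the question buckets records by the time elapsed since the ship's first record. The two rules agree on the sample data but can diverge on other inputs.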

Upvotes: 2
