Kierk

Reputation: 498

Select nth row after orderby in pyspark dataframe

I want to select the second row for each group of names. I used orderBy to sort by name and then by the purchase date/timestamp. It is important that I select the second purchase for each name (by datetime).

Here is the data to build dataframe:

from datetime import datetime

data = [
  ('George', datetime(2020, 3, 24, 3, 19, 58), datetime(2018, 2, 24, 3, 22, 55)),
  ('Andrew', datetime(2019, 12, 12, 17, 21, 30), datetime(2019, 7, 21, 2, 14, 22)),
  ('Micheal', datetime(2018, 11, 22, 13, 29, 40), datetime(2018, 5, 17, 8, 10, 19)),
  ('Maggie', datetime(2019, 2, 8, 3, 31, 23), datetime(2019, 5, 19, 6, 11, 33)),
  ('Ravi', datetime(2019, 1, 1, 4, 19, 47), datetime(2019, 1, 1, 4, 22, 55)),
  ('Xien', datetime(2020, 3, 2, 4, 33, 51), datetime(2020, 5, 21, 7, 11, 50)),
  ('George', datetime(2020, 3, 24, 3, 19, 58), datetime(2020, 3, 24, 3, 22, 45)),
  ('Andrew', datetime(2019, 12, 12, 17, 21, 30), datetime(2019, 9, 19, 1, 14, 11)),
  ('Micheal', datetime(2018, 11, 22, 13, 29, 40), datetime(2018, 8, 19, 7, 11, 37)),
  ('Maggie', datetime(2019, 2, 8, 3, 31, 23), datetime(2018, 2, 19, 6, 11, 42)),
  ('Ravi', datetime(2019, 1, 1, 4, 19, 47), datetime(2019, 1, 1, 4, 22, 17)),
  ('Xien', datetime(2020, 3, 2, 4, 33, 51), datetime(2020, 6, 21, 7, 11, 11)),
  ('George', datetime(2020, 3, 24, 3, 19, 58), datetime(2020, 4, 24, 3, 22, 54)),
  ('Andrew', datetime(2019, 12, 12, 17, 21, 30), datetime(2019, 8, 30, 3, 12, 41)),
  ('Micheal', datetime(2018, 11, 22, 13, 29, 40), datetime(2017, 5, 17, 8, 10, 38)),
  ('Maggie', datetime(2019, 2, 8, 3, 31, 23), datetime(2020, 3, 19, 6, 11, 12)),
  ('Ravi', datetime(2019, 1, 1, 4, 19, 47), datetime(2018, 2, 1, 4, 22, 24)),
  ('Xien', datetime(2020, 3, 2, 4, 33, 51), datetime(2018, 9, 21, 7, 11, 41)),
]
 
df = spark.createDataFrame(data, ['name', 'trial_start', 'purchase'])
df.show(truncate=False)

I order the data by name and then by purchase:

df.orderBy("name","purchase").show()

to produce the result:

+-------+-------------------+-------------------+
|   name|        trial_start|           purchase|
+-------+-------------------+-------------------+
| Andrew|2019-12-12 22:21:30|2019-07-21 06:14:22|
| Andrew|2019-12-12 22:21:30|2019-08-30 07:12:41|
| Andrew|2019-12-12 22:21:30|2019-09-19 05:14:11|
| George|2020-03-24 07:19:58|2018-02-24 08:22:55|
| George|2020-03-24 07:19:58|2020-03-24 07:22:45|
| George|2020-03-24 07:19:58|2020-04-24 07:22:54|
| Maggie|2019-02-08 08:31:23|2018-02-19 11:11:42|
| Maggie|2019-02-08 08:31:23|2019-05-19 10:11:33|
| Maggie|2019-02-08 08:31:23|2020-03-19 10:11:12|
|Micheal|2018-11-22 18:29:40|2017-05-17 12:10:38|
|Micheal|2018-11-22 18:29:40|2018-05-17 12:10:19|
|Micheal|2018-11-22 18:29:40|2018-08-19 11:11:37|
|   Ravi|2019-01-01 09:19:47|2018-02-01 09:22:24|
|   Ravi|2019-01-01 09:19:47|2019-01-01 09:22:17|
|   Ravi|2019-01-01 09:19:47|2019-01-01 09:22:55|
|   Xien|2020-03-02 09:33:51|2018-09-21 11:11:41|
|   Xien|2020-03-02 09:33:51|2020-05-21 11:11:50|
|   Xien|2020-03-02 09:33:51|2020-06-21 11:11:11|
+-------+-------------------+-------------------+

How might I get the second row for each name? In pandas this was easy; I could just use nth (see the sketch below). I have been looking at SQL but have not found a solution. Any suggestions appreciated.
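For reference, this is roughly the pandas version I had in mind. A minimal sketch, assuming the dataset stays small enough that converting to pandas is cheap:

# Convert the Spark DataFrame to pandas (fine at this size)
pdf = df.toPandas()

# nth is 0-based, so nth(1) picks the second-earliest purchase per name
print(pdf.sort_values("purchase").groupby("name").nth(1))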

The output I am looking for would be:

+-------+-------------------+-------------------+
|   name|        trial_start|           purchase|
+-------+-------------------+-------------------+
| Andrew|2019-12-12 22:21:30|2019-08-30 07:12:41|
| George|2020-03-24 07:19:58|2020-03-24 07:22:45|
| Maggie|2019-02-08 08:31:23|2019-05-19 10:11:33|
|Micheal|2018-11-22 18:29:40|2018-05-17 12:10:19|
|   Ravi|2019-01-01 09:19:47|2019-01-01 09:22:17|
|   Xien|2020-03-02 09:33:51|2020-05-21 11:11:50|
+-------+-------------------+-------------------+

Upvotes: 0

Views: 6142

Answers (1)

notNull

Reputation: 31490

Try the window function row_number(): number the rows within each name after ordering by purchase, then keep only row 2.

Example:

from pyspark.sql import Window
from pyspark.sql.functions import col, row_number

# Number each purchase within a name, earliest first
w = Window.partitionBy("name").orderBy(col("purchase"))

# Keep only the second-earliest purchase per name, then drop the helper column
df.withColumn("rn", row_number().over(w)).filter(col("rn") == 2).drop("rn").show()
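If you need the n-th row in general rather than just the second, the same pattern can be wrapped in a small helper. A sketch, with nth_row as a name of my own choosing:

def nth_row(df, group_col, order_col, n):
    # Pick the n-th row (1-based) per group, ordering within each group
    w = Window.partitionBy(group_col).orderBy(col(order_col))
    return df.withColumn("rn", row_number().over(w)).filter(col("rn") == n).drop("rn")

nth_row(df, "name", "purchase", 2).show()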

SQL API:

df.createOrReplaceTempView("tmp")

# Allow regex-quoted column names so `(rn)?+.+` selects every column except rn
spark.sql("SET spark.sql.parser.quotedRegexColumnNames=true")

spark.sql("select `(rn)?+.+` from (select *, row_number() over(partition by name order by purchase) rn from tmp) e where rn = 2").\
show()
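If you prefer not to enable the regex column-name setting, listing the columns explicitly gives the same result; a minimal sketch:

spark.sql("""
    select name, trial_start, purchase
    from (select *, row_number() over(partition by name order by purchase) rn from tmp) e
    where rn = 2
""").show()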

Upvotes: 2
