Reputation: 405
I am having some trouble with window functions. I could not find an example that covers a scenario where the order matters. What I want to do is rank over ColumnA, taking SortOrder (and the first occurrence of each value) into account, so all of the B rows would get value 1, the A rows 2, and the C rows 3. Can I achieve this with the rank function? I cannot simply order by those two columns.
example = example.withColumn("rank", F.rank().over(Window.orderBy('ColumnA')))
This one would not work either, since the order would be lost.
from pyspark.sql.types import StructType, StructField, StringType, IntegerType
import pyspark.sql.functions as F
from pyspark.sql.window import Window

data = [("B", "BA", 1),
        ("B", "BB", 2),
        ("B", "BC", 3),
        ("A", "AA", 4),
        ("A", "AB", 5),
        ("C", "CA", 6),
        ("A", "AC", 7)]
cols = ['ColumnA', 'ColumnB', 'SortOrder']
schema = StructType([StructField('ColumnA', StringType(), True),
                     StructField('ColumnB', StringType(), True),
                     StructField('SortOrder', IntegerType(), True)])
rdd = sc.parallelize(data)
example = spark.createDataFrame(rdd, schema)
Ordering by both columns does not give the desired ranking either:
example = example.withColumn("rank", F.rank().over(Window.orderBy('SortOrder', 'ColumnA')))
Upvotes: 0
Views: 2599
Reputation: 42422
Get the minimum SortOrder for each ColumnA value, rank the groups by that minimum, and join the result back to the original dataframe.
# groupBy().min('SortOrder') produces a column literally named 'min(SortOrder)',
# which is what the window's orderBy refers to below
example2 = example.join(
    example.groupBy('ColumnA')
           .min('SortOrder')
           .select('ColumnA',
                   F.rank().over(Window.orderBy('min(SortOrder)')).alias('rank')),
    on='ColumnA'
).orderBy('SortOrder')
example2.show()
+-------+-------+---------+----+
|ColumnA|ColumnB|SortOrder|rank|
+-------+-------+---------+----+
|      B|     BA|        1|   1|
|      B|     BB|        2|   1|
|      B|     BC|        3|   1|
|      A|     AA|        4|   2|
|      A|     AB|        5|   2|
|      C|     CA|        6|   3|
|      A|     AC|        7|   2|
+-------+-------+---------+----+
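If you prefer to avoid the join, here is a minimal sketch of an equivalent approach using only window functions (the helper column name grp_min is just for illustration): compute each group's minimum SortOrder with a window partitioned by ColumnA, then take dense_rank over that minimum so all rows of a group share one rank.

import pyspark.sql.functions as F
from pyspark.sql.window import Window

# Minimum SortOrder per ColumnA group = the group's first occurrence
grp_min = F.min('SortOrder').over(Window.partitionBy('ColumnA'))

example3 = (example
            .withColumn('grp_min', grp_min)  # hypothetical helper column
            # dense_rank keeps ranks consecutive (1, 2, 3, ...); note the
            # unpartitioned window moves all rows to a single partition
            .withColumn('rank', F.dense_rank().over(Window.orderBy('grp_min')))
            .drop('grp_min')
            .orderBy('SortOrder'))
example3.show()

This should produce the same ranks as above, trading the join for a single-partition sort.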
Upvotes: 1