Reputation: 303
My Spark data looks like this:
area  product  score
a     aa       .39
a     bb       .03
a     cc       1.1
a     dd       .5
b     ee       .02
b     aa       1.2
b     mm       .5
b     bb       1.3
I want the top 3 products per area, ranked by the score variable. My final output should be:
area  product  score  rank
a     cc       1.1    1
a     dd       .5     2
a     aa       .39    3
b     bb       1.3    1
b     aa       1.2    2
b     mm       .5     3
How can I do this in PySpark?
Here is what I have tried so far:
from pyspark.sql import Window
import pyspark.sql.functions as psf

wA = Window.orderBy(psf.desc("score"))
df = df.withColumn(
    "rank",
    psf.dense_rank().over(wA))
But this is not working for me.
Upvotes: 1
Views: 389
Reputation: 319
Partition the window by area, order by score descending, and filter on rank <= 3 to get the desired result:
import pyspark.sql.functions as psf
from pyspark.sql import SparkSession
from pyspark.sql.window import Window

spark = SparkSession.builder.appName("Test").master("local[*]") \
    .getOrCreate()

df = spark.createDataFrame([('a', 'aa', .39),
                            ('a', 'bb', .03),
                            ('a', 'cc', 1.1),
                            ('a', 'dd', .5),
                            ('b', 'ee', .02),
                            ('b', 'aa', 1.2),
                            ('b', 'mm', .5),
                            ('b', 'bb', 1.3)],
                           ['area', 'product', 'score'])

# Window partitioned by area, ordered by score descending
wA = Window.partitionBy("area").orderBy(psf.desc("score"))

# Rank within each area, then keep only the top 3 per area
df = df.withColumn("rank",
                   psf.dense_rank().over(wA))
df.filter("rank <= 3").show()
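One thing to keep in mind: dense_rank gives tied scores the same rank, so it can return more than 3 rows per area if there are ties. If you need exactly the top 3 rows per area regardless of ties, a minimal sketch using row_number over the same window wA (the extra column name rn is just illustrative) would be:

# row_number assigns a unique position within each area, so ties
# can never push the result past 3 rows per area
df_top3 = df.withColumn("rn", psf.row_number().over(wA)) \
            .filter("rn <= 3") \
            .drop("rn")
df_top3.show()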
Upvotes: 3