Reputation:
Here's a SQL expression that I'm trying to migrate to Spark Scala.
SELECT
a.senderId,
b.company_id,
ROW_NUMBER() OVER(PARTITION BY a.senderId ORDER BY b.chron_rank) AS rnk
FROM df1 a
JOIN df2 b
ON a.senderId = b.member_id
WHERE a.datepartition BETWEEN concat(b.start_date,'-00') AND concat(b.end_date,'-00')
I'm a little lost with the window function. I started with something like this:
val temp = df2.join(df1, $"df2.member_id" === $"df1.senderId")
.select($"df1.senderId", $"df2.company_id")
.......
Upvotes: 0
Views: 80
Reputation: 316
Try this; you may still run into issues with the where clause:
import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions.{concat, lit, row_number}

val w = Window.partitionBy("senderId").orderBy("chron_rank")
val temp = df2.as("b").join(df1.as("a"), $"b.member_id" === $"a.senderId")
  .where($"a.datepartition".between(
    concat($"b.start_date", lit("-00")), concat($"b.end_date", lit("-00"))))
  .select($"a.senderId", $"b.company_id", $"b.chron_rank")
  .withColumn("rnk", row_number().over(w))
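Note that the where is applied before the window function so that rnk matches the SQL: in SQL, the WHERE clause filters rows before ROW_NUMBER() is evaluated, so filtering after withColumn would produce different rank values.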
Upvotes: 0
Reputation: 6323
Try this:
df2.as("b")
.join(df1.as("a"), $"a.senderId" === $"b.member_id" && $"a.datepartition".between(
concat($"b.start_date",lit("-00")), concat($"b.end_date", lit("-00")))
)
.selectExpr("a.senderId",
"b.company_id",
"ROW_NUMBER() OVER(PARTITION BY a.senderId ORDER BY b.chron_rank) AS rnk")
Upvotes: 1