user11751463

Migrating window functions from SQL to Spark Scala

Here's a SQL expression that I'm trying to migrate to Spark Scala.

SELECT
 a.senderId,
 b.company_id,
 ROW_NUMBER() OVER(PARTITION BY a.senderId ORDER BY b.chron_rank) AS rnk
FROM df1 a
JOIN df2 b
ON a.senderId = b.member_id
WHERE a.datepartition BETWEEN concat(b.start_date,'-00') AND concat(b.end_date,'-00') 

I'm a little lost with the window function. I started with something like this:

val temp = df2.as("df2").join(df1.as("df1"), $"df2.member_id" === $"df1.senderId")
    .select($"df1.senderId", $"df2.company_id")
    .......

Upvotes: 0

Views: 80

Answers (2)

HimanshuSPaul

Reputation: 316

Try this; you may still run into an issue with the where clause:

import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions.{concat, lit, row_number}

val temp = df2.as("b").join(df1.as("a"), $"b.member_id" === $"a.senderId")
  // apply the date filter while the date columns are still in scope
  .where($"a.datepartition".between(concat($"b.start_date", lit("-00")),
                                    concat($"b.end_date", lit("-00"))))
  .withColumn("rnk", row_number().over(Window.partitionBy($"a.senderId").orderBy($"b.chron_rank")))
  .select($"a.senderId", $"b.company_id", $"rnk")
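Note that the filter runs before the final select: once the projection drops datepartition, start_date, and end_date, those columns are no longer available to the where clause, which is likely the issue hinted at above.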

Upvotes: 0

Som

Reputation: 6323

Try this:

import org.apache.spark.sql.functions.{concat, lit}

df2.as("b")
  .join(df1.as("a"), $"a.senderId" === $"b.member_id" && $"a.datepartition".between(
    concat($"b.start_date", lit("-00")), concat($"b.end_date", lit("-00"))))
  .selectExpr("a.senderId",
    "b.company_id",
    "ROW_NUMBER() OVER(PARTITION BY a.senderId ORDER BY b.chron_rank) AS rnk")

Upvotes: 1
