Chaouki

Reputation: 465

How to get the row from a dataframe that has the maximum value in a specific column?

I have a dataframe like this

df.show(5)
kv      |list1       |list2                |p
[k1,v2] |[1,2,5,9]   |[5,1,7,9,6,3,1,4,9]  |0.5
[k1,v3] |[1,2,5,8,9] |[5,1,7,9,6,3,1,4,15] |0.9
[k2,v2] |[77,2,5,9]  |[0,1,8,9,7,3,1,4,100]|0.01
[k5,v5] |[1,0,5,9]   |[5,1,7,9,6,3,1,4,3]  |0.3
[k9,v2] |[1,2,5,9]   |[5,1,7,9,6,3,1,4,200]|2.5

df.count()
5200158

I want to get the row that has the maximum p. The snippet below works for me, but I don't know if there is a cleaner way:

import org.apache.spark.sql.functions.{col, max, struct}

val f = df.select(max(struct(
    col("p") +: df.columns.collect { case x if x != "p" => col(x) }: _*
))).first()
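
This works because struct ordering compares fields lexicographically, so placing p first makes max pick the row with the largest p. The result is a Row wrapping a single struct; unpacking it might look like this (a minimal sketch, assuming p is a DoubleType column):

// f holds a single struct field; pull it out to get the winning row
val best = f.getStruct(0)      // struct fields: p first, then the other columns
val bestP = best.getDouble(0)  // p sits at position 0 because it was placed first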

Upvotes: 3

Views: 13564

Answers (2)

user9583019

Reputation: 1

Just order by and then take:

import org.apache.spark.sql.functions.desc

df.orderBy(desc("p")).take(1)

or

df.orderBy(desc("p")).limit(1).first
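
On 5M+ rows a full sort just to keep one row may look wasteful, but Catalyst should rewrite a sort followed by a limit into a single top-k pass (a TakeOrderedAndProject physical node), so the dataset is never globally sorted. You can check with explain() (a quick sketch, reusing the import above):

import org.apache.spark.sql.functions.desc

// A TakeOrderedAndProject node in the plan means Spark computes
// the top row without sorting the whole dataframe
df.orderBy(desc("p")).limit(1).explain()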

Upvotes: 8

Raphael Roth

Reputation: 27383

You can also use window functions; this is especially useful if the logic for selecting the row gets more complex than a global min/max:

import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions.max
import spark.implicits._ // for the $"..." column syntax, assuming a SparkSession named spark

df
  .withColumn("max_p", max($"p").over(Window.partitionBy()))
  .where($"p" === $"max_p")
  .drop($"max_p")
  .first()
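
To illustrate the "more complex" case: giving the window a non-empty partition turns this into a per-group top row, e.g. the max-p row for each kv key (a hedged sketch; it assumes kv is usable as a grouping key):

import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions.{col, max}

// Hypothetical per-key variant: keep, for each kv, the row with the highest p
val byKey = Window.partitionBy(col("kv"))

val topPerKey = df
  .withColumn("max_p", max(col("p")).over(byKey))
  .where(col("p") === col("max_p"))
  .drop("max_p")

Note that ties on p keep all tied rows; a row_number over the same window ordered by p descending would give exactly one row per key instead.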

Upvotes: 4
