user2253546

PySpark: adding a new column with values per group

Suppose I have the following data set:

a | b   
1 | 0.4 
1 | 0.8 
1 | 0.5 
2 | 0.4
2 | 0.1

I would like to add a new column called "label" whose values are determined separately within each group of a: the row with the highest value of b in a group is labeled 1, and all other rows in that group are labeled 0.

The output would look like this:

a | b   | label
1 | 0.4 | 0
1 | 0.8 | 1
1 | 0.5 | 0
2 | 0.4 | 1
2 | 0.1 | 0

How can I do this efficiently using PySpark?

Upvotes: 3

Answers (1)

zero323

You can do it with window functions. First you'll need a couple of imports:

from pyspark.sql.functions import desc, row_number, when
from pyspark.sql.window import Window

and a window definition:

w = Window.partitionBy("a").orderBy(desc("b"))

Finally, combine them:

# the row with the highest b in each a-group gets row_number 1, hence label 1
df.withColumn("label", when(row_number().over(w) == 1, 1).otherwise(0))

For example data:

df = sc.parallelize([
    (1, 0.4), (1, 0.8), (1, 0.5), (2, 0.4), (2, 0.1)
]).toDF(["a", "b"])
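
On Spark 2.0 or later, where a SparkSession is usually available as spark, an equivalent way to build the same frame is (a sketch assuming that session object exists):

# assumes a SparkSession bound to the name `spark`
df = spark.createDataFrame(
    [(1, 0.4), (1, 0.8), (1, 0.5), (2, 0.4), (2, 0.1)],
    ["a", "b"]
)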

the result is:

+---+---+-----+
|  a|  b|label|
+---+---+-----+
|  1|0.8|    1|
|  1|0.5|    0|
|  1|0.4|    0|
|  2|0.4|    1|
|  2|0.1|    0|
+---+---+-----+
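
Note that row_number picks exactly one row per group, so if a group contains ties for the highest b, only one of the tied rows gets label 1. If you instead want every row equal to the group maximum labeled 1, one possible variant (a sketch) compares b against max over an unordered window:

from pyspark.sql.functions import col, max as max_, when
from pyspark.sql.window import Window

# unordered window: the aggregate is computed over the whole a-group
w2 = Window.partitionBy("a")

# every row whose b equals its group's maximum gets label 1
df.withColumn("label", when(col("b") == max_("b").over(w2), 1).otherwise(0))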

Upvotes: 7
