Reputation: 1939
I have a dataframe like this:
test = spark.createDataFrame(
[
(1, 0, 100),
(2, 0, 200),
(3, 1, 150),
(4, 1, 250),
],
['id', 'flag', 'col1']
)
I would like to create another column containing the average of col1 grouped by flag. The groupBy gives the per-group averages:
from pyspark.sql import functions as f
test.groupBy(f.col('flag')).agg(f.avg(f.col('col1'))).show()
+----+---------+
|flag|avg(col1)|
+----+---------+
| 0| 150.0|
| 1| 200.0|
+----+---------+
End product:
+---+----+----+---+
| id|flag|col1|avg|
+---+----+----+---+
| 1| 0| 100|150|
| 2| 0| 200|150|
| 3| 1| 150|200|
| 4| 1| 250|200|
+---+----+----+---+
Upvotes: 1
Views: 2677
Reputation: 3419
You can use a window function:
from pyspark.sql.window import Window
from pyspark.sql import functions as F
w = Window.partitionBy('flag')
test.withColumn("avg", F.avg("col1").over(w)).show()
+---+----+----+-----+
| id|flag|col1| avg|
+---+----+----+-----+
| 1| 0| 100|150.0|
| 2| 0| 200|150.0|
| 3| 1| 150|200.0|
| 4| 1| 250|200.0|
+---+----+----+-----+
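Note that avg returns a double, so the new column shows 150.0 rather than 150. If you want whole numbers as in your expected output, a minimal sketch (assuming truncating to int is acceptable) is to cast the windowed average:
from pyspark.sql.window import Window
from pyspark.sql import functions as F
w = Window.partitionBy('flag')
# cast the per-group average to int so the column shows 150/200 instead of 150.0/200.0
test.withColumn("avg", F.avg("col1").over(w).cast("int")).show()
An equivalent alternative is to aggregate and join back, e.g. test.join(test.groupBy('flag').agg(F.avg('col1').alias('avg')), 'flag'), but the window version avoids the explicit join.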
Upvotes: 3