Mauro Gentile

Reputation: 1511

conditional aggregation in pySpark groupby

Easy question from a newbie in pySpark: I have a df and I would like to make a conditional aggregation, returning the aggregation result if the denominator is different from 0, otherwise 0.

My tentative produces an error:

groupBy = ["K"]
exprs = [(sum("A") + sum("B")) / sum("C") if sum("C") != 0 else 0]
grouped_df = new_df.groupby(*groupBy).agg(*exprs)

Any hint?

Thank you

Upvotes: 2

Views: 8467

Answers (1)

MaFF

Reputation: 10086

You have to use when/otherwise for if/else:

import pyspark.sql.functions as psf
new_df.groupby("K").agg(
    psf.when(psf.sum("C")==0, psf.lit(0)).otherwise((psf.sum("A") + psf.sum("B"))/psf.sum("C")).alias("sum")
)

But you can also do it this way: in Spark SQL, dividing by zero yields null rather than an error, so you can compute the ratio unconditionally and then fill the nulls with 0:

import pyspark.sql.functions as psf
new_df.groupby("K").agg(
    ((psf.sum("A") + psf.sum("B"))/psf.sum("C")).alias("sum")
).na.fill({"sum": 0})
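To make the intended result concrete, here is the same per-group computation sketched in plain Python (no Spark needed), on a small hypothetical dataset; the rows and values are made up for illustration:

```python
from collections import defaultdict

# Hypothetical sample rows: (K, A, B, C)
rows = [
    ("x", 1, 2, 3),
    ("x", 4, 5, 0),
    ("y", 1, 1, 0),
    ("y", 2, 2, 0),
]

# Per-key column sums, mirroring groupby("K").agg(sum("A"), sum("B"), sum("C"))
totals = defaultdict(lambda: [0, 0, 0])
for k, a, b, c in rows:
    totals[k][0] += a
    totals[k][1] += b
    totals[k][2] += c

# Conditional aggregation: (sum(A) + sum(B)) / sum(C), or 0 when sum(C) == 0
result = {
    k: (sa + sb) / sc if sc != 0 else 0
    for k, (sa, sb, sc) in totals.items()
}
# result == {"x": 4.0, "y": 0}
```

Group "x" sums to A=5, B=7, C=3, giving (5 + 7) / 3 = 4.0; group "y" has sum(C) == 0, so the condition kicks in and it gets 0. Both Spark variants above produce this same outcome.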

Upvotes: 10
