LKA

Reputation: 553

Pyspark: Delete rows on column condition after groupBy

This is my input dataframe:

id val    
1  Y
1  N
2  a
2  b
3  N

Result should be:

id val    
1  Y     
2  a    
2  b
3  N

I want to group by the column id, and for any id whose val values include both Y and N, remove the rows where val is N. Please help me resolve this, as I am a beginner in PySpark.

Upvotes: 2

Views: 1758

Answers (1)

gaw

Reputation: 1960

You can first identify the ids that need cleanup with a filter for val == "Y", and then join this dataframe back to the original one. Finally, filter for the rows you want to keep: either the id has no "Y" at all (the left join leaves val_Y null), or the id does have a "Y" and the row's val is not "N". PySpark should be able to handle the self-join even if there are a lot of rows. The example is shown below:

from pyspark.sql.functions import col

df_new = spark.createDataFrame([
    (1, "Y"), (1, "N"), (1, "X"), (1, "Z"),
    (2, "a"), (2, "b"), (3, "N")
], ("id", "val"))

# ids that contain a "Y" row
df_Y = df_new.filter(col("val") == "Y").withColumnRenamed("val", "val_Y").withColumnRenamed("id", "id_Y")
# left join back: ids without any "Y" get null in val_Y
df_new = df_new.join(df_Y, df_new["id"] == df_Y["id_Y"], how="left")
# keep ids with no "Y" at all, or the non-"N" rows of ids that do have a "Y"
df_new.filter((col("val_Y").isNull()) | ((col("val_Y") == "Y") & ~(col("val") == "N"))).select("id", "val").show()

The result matches your preferred output:

+---+---+
| id|val|
+---+---+
|  1|  X|
|  1|  Y|
|  1|  Z|
|  3|  N|
|  2|  a|
|  2|  b|
+---+---+
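
As an aside (not part of the original answer), the same result can be obtained without a self-join by using a window function: flag, per id, whether the group contains a "Y" row, and drop the "N" rows only in those groups. A minimal sketch under the same sample data; df and has_Y are names introduced here for illustration:

from pyspark.sql import Window
from pyspark.sql.functions import col, max as max_, when

df = spark.createDataFrame([
    (1, "Y"), (1, "N"), (1, "X"), (1, "Z"), (2, "a"), (2, "b"), (3, "N")
], ("id", "val"))

# per-id flag: 1 if the group contains at least one "Y" row, else 0
has_Y = max_(when(col("val") == "Y", 1).otherwise(0)).over(Window.partitionBy("id"))

# drop the "N" rows only in groups that also contain a "Y"
df.withColumn("has_Y", has_Y) \
  .filter(~((col("has_Y") == 1) & (col("val") == "N"))) \
  .drop("has_Y").show()

This should avoid the extra shuffle of the join, since the flag is computed in a single pass over each partition of id.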

Upvotes: 5
