mamatv

Reputation: 3661

pyspark: Auto filling in implicit missing values

I have a dataframe:

user day amount
a    2   10
a    1   14
a    4   5
b    1   4

As you can see, the maximum value of day is 4 and the minimum is 1. I want to fill in 0 for the amount column on every missing day for every user, so the above dataframe becomes:

user day amount
a    2   10
a    1   14
a    4   5
a    3   0
b    1   4
b    2   0
b    3   0
b    4   0

How could I do that in PySpark? Many thanks.

Upvotes: 2

Views: 344
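
Both answers below assume the question's dataframe is bound to df; here is a minimal sketch to reproduce it (assuming an active SparkSession named spark):

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# the example data from the question
df = spark.createDataFrame(
    [("a", 2, 10), ("a", 1, 14), ("a", 4, 5), ("b", 1, 4)],
    ["user", "day", "amount"],
)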

Answers (2)

murtihash

Reputation: 8410

Another way to do this is to use sequence, the array functions, and explode (Spark 2.4+):

from pyspark.sql import functions as F
from pyspark.sql.window import Window

# single-partition window used to compute the global min/max of day
w = Window().partitionBy(F.lit(0))

(df
  # attach the full day range [min, max] as an array on every row
  .withColumn("boundaries", F.sequence(F.min("day").over(w), F.max("day").over(w), F.lit(1)))
  .groupBy("user")
  .agg(F.collect_list("day").alias("day"),
       F.collect_list("amount").alias("amount"),
       F.first("boundaries").alias("boundaries"))
  # keep only the days this user is missing
  .withColumn("boundaries", F.array_except("boundaries", "day"))
  # existing days first, then the missing days appended at the end
  .withColumn("day", F.flatten(F.array("day", "boundaries")))
  .drop("boundaries")
  # arrays_zip pads the shorter amount array with nulls for the missing days
  .withColumn("zip", F.explode(F.arrays_zip("day", "amount")))
  .select("user", "zip.day",
          F.when(F.col("zip.amount").isNull(), F.lit(0))
           .otherwise(F.col("zip.amount")).alias("amount"))
 ).show()
#+----+---+------+
#|user|day|amount|
#+----+---+------+
#|   a|  2|    10|
#|   a|  1|    14|
#|   a|  4|     5|
#|   a|  3|     0|
#|   b|  1|     4|
#|   b|  2|     0|
#|   b|  3|     0|
#|   b|  4|     0|
#+----+---+------+
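
The step that makes this work is arrays_zip: when the zipped arrays have different lengths, the shorter one is padded with nulls, and the when/otherwise then turns those nulls into 0. A minimal standalone sketch of that padding behavior (using the spark session from the setup sketch above):

# zip a 3-element array with a 2-element array; the shorter one is
# padded with null, so the third row comes out as (3, null)
spark.createDataFrame([([1, 2, 3], [10, 20])], ["a", "b"]) \
     .select(F.explode(F.arrays_zip("a", "b")).alias("z")) \
     .select("z.a", "z.b") \
     .show()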

Upvotes: 2

anky

Reputation: 75120

Here is one approach. You can get the min and max values first, then group on the user column and pivot, fill in the missing day columns and replace all nulls with 0, and then stack them back:

from pyspark.sql import functions as F

# global min and max of day
min_max = df.agg(F.min("day"), F.max("day")).collect()[0]

# pivot: one column per existing day, then fill the nulls with 0
df1 = df.groupBy("user").pivot("day").agg(F.first("amount").alias("amount")).na.fill(0)

# add a 0-filled column for every day in [min, max] the pivot missed
missing_cols = [F.lit(0).alias(str(i)) for i in range(min_max[0], min_max[1] + 1)
                if str(i) not in df1.columns]
df1 = df1.select("*", *missing_cols)

#+----+---+---+---+---+
#|user|  1|  2|  4|  3|
#+----+---+---+---+---+
#|   b|  4|  0|  0|  0|
#|   a| 14| 10|  5|  0|
#+----+---+---+---+---+

# the next step is inspired by https://stackoverflow.com/a/37865645/9840637
# build an array of (day, amount) structs, one per day column, and explode it
arr = F.explode(F.array([F.struct(F.lit(c).alias("day"), F.col(c).alias("amount"))
                         for c in df1.columns[1:]])).alias("kvs")
(df1.select("user", arr)
    .select("user", "kvs.day", "kvs.amount")
    .orderBy("user")).show()

+----+---+------+
|user|day|amount|
+----+---+------+
|   a|  1|    14|
|   a|  2|    10|
|   a|  4|     5|
|   a|  3|     0|
|   b|  1|     4|
|   b|  2|     0|
|   b|  4|     0|
|   b|  3|     0|
+----+---+------+

Note: since the day column was pivoted, its dtype may have changed (the stacked day values come back as strings), so you may have to cast them back to the original dtype, as in the sketch below.
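
A minimal sketch of that cast, assuming the stacked result above was assigned to a hypothetical name df2 instead of being shown directly, and that day was originally an integer:

# hypothetical: df2 is the stacked dataframe from the previous step;
# the day values are strings after the pivot, so cast them back to int
df2 = df2.withColumn("day", F.col("day").cast("int"))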

Upvotes: 4
