Reputation: 3855
I have a dataframe named init holding the initial state of each record. I have a second dataframe with the same schema, where each row carries an update for exactly one field of init and Null in all other fields. How can I reconstruct each record by applying the changes consecutively? To make this clearer, let's look at an example:
listOfTuples = [(1, "Status_0", "2019", "value_col_4", 0)]
init = spark.createDataFrame(listOfTuples, ["id", "status", "year", "col_4", "ord"])
#initial state
>>> init.show()
+---+--------+----+-----------+---+
| id|  status|year|      col_4|ord|
+---+--------+----+-----------+---+
|  1|Status_0|2019|value_col_4|  0|
+---+--------+----+-----------+---+
#dataframe with changes
from pyspark.sql.types import StructType, StructField, StringType, IntegerType

# id and ord are integers so the schema lines up with init for the union
schema = StructType([StructField('id', IntegerType(), True),
                     StructField('status', StringType(), True),
                     StructField('year', StringType(), True),
                     StructField('col_4', StringType(), True),
                     StructField('ord', IntegerType(), True)])
listOfTuples = [(1, "Status_A", None, None, 1),
                (1, "Status_B", None, None, 2),
                (1, None, None, "new_val", 3),
                (1, "Status_C", None, None, 4)]
changes = spark.createDataFrame(listOfTuples, schema)
>>> changes.show()
+---+--------+----+-------+---+
| id|  status|year|  col_4|ord|
+---+--------+----+-------+---+
|  1|Status_A|null|   null|  1|
|  1|Status_B|null|   null|  2|
|  1|    null|null|new_val|  3|
|  1|Status_C|null|   null|  4|
+---+--------+----+-------+---+
I want the changes applied consecutively, in the order of the ord column, with the values in dataframe init as the baseline. So I want my final dataframe to look like this:
>>> final.show()
+---+--------+----+-----------+
| id|  status|year|      col_4|
+---+--------+----+-----------+
|  1|Status_0|2019|value_col_4|
|  1|Status_A|2019|value_col_4|
|  1|Status_B|2019|value_col_4|
|  1|Status_B|2019|    new_val|
|  1|Status_C|2019|    new_val|
+---+--------+----+-----------+
I was thinking about unioning the two dataframes, sorting by the ord column, and then somehow propagating the changes downwards. Does anyone have an idea how to do this?
Upvotes: 0
Views: 707
Reputation: 3855
In Python, using the code from @C.S.Reddy Gadipally:
import pyspark.sql.functions as func
from pyspark.sql.window import Window

f = init.union(changes)
w = Window.partitionBy(f['id']).orderBy(f['ord'])
for c in f.columns[1:]:
    # last() with ignorenulls=True carries the most recent non-null value forward
    f = f.withColumn(c, func.last(c, True).over(w))
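This yields one row per applied change with every field filled in. To match the question's final dataframe, a minimal follow-up (continuing from the snippet above; the name final is just illustrative) sorts by ord and drops the helper column:

final = f.orderBy('ord').drop('ord')
final.show()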
Upvotes: 2
Reputation: 1758
This is Scala code, but I hope it helps. You may drop or rename the columns at the end. The solution is to do a union and then, for all three columns, take the last non-null value (org.apache.spark.sql.functions.last) within a frame from unboundedPreceding rows to currentRow.
import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.expressions.WindowSpec
import org.apache.spark.sql.functions._
scala> initial.show
+---+--------+----+-----------+---+
| id|  status|year|      col_4|ord|
+---+--------+----+-----------+---+
|  1|Status_0|2019|value_col_4|  0|
+---+--------+----+-----------+---+
scala> changes.show
+---+--------+----+-------+---+
| id|  status|year|  col_4|ord|
+---+--------+----+-------+---+
|  1|Status_A|null|   null|  1|
|  1|Status_B|null|   null|  2|
|  1|    null|null|new_val|  3|
|  1|Status_C|null|   null|  4|
+---+--------+----+-------+---+
scala> val inter = initial.union(changes)
inter: org.apache.spark.sql.Dataset[org.apache.spark.sql.Row] = [id: string, status: string ... 3 more fields]
scala> inter.show
+---+--------+----+-----------+---+
| id|  status|year|      col_4|ord|
+---+--------+----+-----------+---+
|  1|Status_0|2019|value_col_4|  0|
|  1|Status_A|null|       null|  1|
|  1|Status_B|null|       null|  2|
|  1|    null|null|    new_val|  3|
|  1|Status_C|null|       null|  4|
+---+--------+----+-----------+---+
scala> val overColumns = Window.partitionBy("id").orderBy("ord").rowsBetween(Window.unboundedPreceding, Window.currentRow)
overColumns: org.apache.spark.sql.expressions.WindowSpec = org.apache.spark.sql.expressions.WindowSpec@70f4b378
scala> val output = inter.withColumn("newstatus",
         last("status", true).over(overColumns)).withColumn("newyear",
         last("year", true).over(overColumns)).withColumn("newcol_4",
         last("col_4", true).over(overColumns))
output: org.apache.spark.sql.DataFrame = [id: string, status: string ... 6 more fields]
scala> output.show(false)
+---+--------+----+-----------+---+---------+-------+-----------+
|id |status  |year|col_4      |ord|newstatus|newyear|newcol_4   |
+---+--------+----+-----------+---+---------+-------+-----------+
|1  |Status_0|2019|value_col_4|0  |Status_0 |2019   |value_col_4|
|1  |Status_A|null|null       |1  |Status_A |2019   |value_col_4|
|1  |Status_B|null|null       |2  |Status_B |2019   |value_col_4|
|1  |null    |null|new_val    |3  |Status_B |2019   |new_val    |
|1  |Status_C|null|null       |4  |Status_C |2019   |new_val    |
+---+--------+----+-----------+---+---------+-------+-----------+
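As mentioned above, the helper columns can be dropped or renamed at the end. A minimal Scala sketch (the name result is illustrative) that restores the original column names and matches the question's expected final output:

// keep only the forward-filled values, renaming them back to the original columns
val result = output.orderBy("ord").select(col("id"),
  col("newstatus").as("status"),
  col("newyear").as("year"),
  col("newcol_4").as("col_4"))
result.show()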
Upvotes: 2