Haha

Reputation: 1009

Get a row after each time a certain column changes

+---------------+-------+-------+-------------------+
|ID_NOTIFICATION|CD_ETAT|TYP_MVT|DT_FIN             |
+---------------+-------+-------+-------------------+
|3111341        |AT     |C      |2019-06-12 00:03:37|
|3111341        |AN     |M      |2019-06-12 15:08:43|
|3111341        |AN     |M      |2019-06-12 15:10:11|
|3111341        |AN     |M      |2019-06-12 15:10:50|
|3111341        |AN     |M      |2019-06-12 15:11:34|
|3111341        |AN     |M      |2019-06-12 15:12:03|
|3111341        |AN     |M      |2019-06-12 15:14:04|
|3111341        |AN     |M      |2019-06-12 15:14:40|
|3111341        |AN     |M      |2019-06-12 15:15:22|
|3111341        |AN     |M      |2019-06-12 15:15:57|
|3111341        |AN     |M      |2019-06-12 15:25:28|
|3111341        |AN     |M      |2019-06-12 15:25:29|
|3111341        |AN     |M      |2019-06-12 15:27:50|
|3111341        |AN     |M      |2019-06-12 15:28:37|
|3111341        |AN     |M      |2019-06-12 15:32:22|
|3111341        |AN     |M      |2019-06-12 15:32:59|
|3111341        |EC     |M      |2019-06-12 15:33:04|
|3111341        |AN     |M      |2019-06-13 00:04:33|
|3111341        |TE     |M      |9999-01-01 00:00:00|
+---------------+-------+-------+-------------------+

I need to extract one row from the above DataFrame for each change of CD_ETAT.

This:

import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions._
import spark.implicits._ // for the $"..." column syntax

val window = Window.partitionBy("CD_ETAT").orderBy("DT_FIN")

df.withColumn("row_num", row_number().over(window))
  .filter($"row_num" === 1)
  .drop("row_num")

seemed to work, but actually it doesn't: since the window is partitioned by CD_ETAT, it keeps only one row per distinct CD_ETAT value. For the above example it gives:

+---------------+-------+-------+-------------------+
|ID_NOTIFICATION|CD_ETAT|TYP_MVT|DT_FIN             |
+---------------+-------+-------+-------------------+
|3111341        |EC     |M      |2019-06-12 15:33:04|
|3111341        |AN     |M      |2019-06-13 00:04:33|
|3111341        |TE     |M      |9999-01-01 00:00:00|
|3111341        |AT     |C      |2019-06-12 00:03:37|
+---------------+-------+-------+-------------------+

But a correct output would also include the second line of the input DataFrame.

I want one row in the output after each change of CD_ETAT in the input.
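
For the sample above, the expected output would therefore be:

+---------------+-------+-------+-------------------+
|ID_NOTIFICATION|CD_ETAT|TYP_MVT|DT_FIN             |
+---------------+-------+-------+-------------------+
|3111341        |AT     |C      |2019-06-12 00:03:37|
|3111341        |AN     |M      |2019-06-12 15:08:43|
|3111341        |EC     |M      |2019-06-12 15:33:04|
|3111341        |AN     |M      |2019-06-13 00:04:33|
|3111341        |TE     |M      |9999-01-01 00:00:00|
+---------------+-------+-------+-------------------+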

Thank you.

Upvotes: 0

Views: 91

Answers (1)

belka

Reputation: 1530

The idea: for each line, you need the previous CD_ETAT. You can get it with a window function, with a self-join, or by converting to an RDD and tracking the previous row's value by hand.

Self-join:

val window = Window.partitionBy("ID_NOTIFICATION").orderBy("DT_ETAT")

val df2 = df.withColumn("row_num", row_number().over(window))

df2
  .join(df2, col("row_num") === col("row_num")-lit(1))
  .filter(col("etat_before") != col("etat_after"))
  .select(...)

You just need the aliasing to differentiate the two sides (given the columns have the same names), but you get the idea. Note that the first row of each notification has no predecessor, so this inner join drops it; the approaches below keep it.
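
Also note that this version usually shuffles the data twice, once for the window and once for the join; the lag-based version below reuses a single window and tends to be cheaper.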

Window function:

df
  // reuses the window defined above (partitioned by ID_NOTIFICATION, ordered by DT_FIN)
  .withColumn("PREV_ETAT", lag(col("CD_ETAT"), 1).over(window))
  // PREV_ETAT is null on the first row of each notification; keep that row
  // too, since the initial state should appear in the output.
  .filter(col("PREV_ETAT").isNull || col("PREV_ETAT") =!= col("CD_ETAT"))
  .select("ID_NOTIFICATION", "CD_ETAT", "TYP_MVT", "DT_FIN")

With RDDs:

import java.sql.Timestamp

// DT_FIN carries a time component, so Timestamp fits better than Date here.
case class LineBefore(ID_NOTIFICATION: Int, CD_ETAT: String, TYP_MVT: String, DT_FIN: Timestamp)
case class LineAfter(ID_NOTIFICATION: Int, CD_ETAT: String, TYP_MVT: String, DT_FIN: Timestamp, PREV_ETAT: String)

df
  .as[LineBefore]
  .rdd
  .groupBy(_.ID_NOTIFICATION)
  .flatMap { case (_, events) =>
    var prevEtat: String = null

    // Sort each notification's events chronologically, then keep only
    // the rows whose state differs from the previous one (the first
    // row always differs from the initial null, so it is kept).
    events.toSeq.sortBy(_.DT_FIN.getTime).flatMap { e =>
      val changed = prevEtat != e.CD_ETAT
      val out =
        if (changed) Some(LineAfter(e.ID_NOTIFICATION, e.CD_ETAT, e.TYP_MVT, e.DT_FIN, prevEtat))
        else None
      prevEtat = e.CD_ETAT
      out
    }
  }
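
One caveat with the RDD version: groupBy pulls each notification's full event list onto a single executor, so for very large groups the two DataFrame approaches above will usually scale better.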

Hope this helps. Do not hesitate to reach out if it doesn't, or to accept the answer otherwise.

Upvotes: 1
