Chandan Bhattad

Reputation: 371

Apache Spark - Transformation - Row values as column headers - Pivot

I have a dataset like the one below (id, date, price):

 - 1, 2017-01-10, 100
 - 1, 2017-01-11, 110
 - 2, 2017-01-10, 100
 - 2, 2017-01-12, 120

I need the result in the following format:

pidx/date : 2017-01-10  2017-01-11  2017-01-12
1:          100         110         -
2:          100         -           120

Which transformations will produce the above output?

Upvotes: 2

Views: 251

Answers (1)

koiralo

Reputation: 23109

You can use pivot with groupBy to get this output:

import spark.implicits._
import org.apache.spark.sql.functions._ //needed for sum

//dummy data 
val df = Seq(
  (1, "2017-01-10", 100),
  (1, "2017-01-11", 110),
  (2, "2017-01-10", 100),
  (2, "2017-01-12", 120)
).toDF("id", "date", "price")

//group by id, pivot on date, and aggregate the prices with sum
val resultDF = df.groupBy("id").pivot("date").agg(sum("price"))

resultDF.show()

Output:

+---+----------+----------+----------+
|id |2017-01-10|2017-01-11|2017-01-12|
+---+----------+----------+----------+
| 1 |100       |110       |null      |
| 2 |100       |null      |120       |
+---+----------+----------+----------+
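If you want the "-" placeholder from the question instead of null, one option is to cast the pivoted columns to string and fill the nulls. This is a sketch beyond the original answer: it continues from resultDF above, and the "-" fill and the column selection are assumptions, not part of the accepted code.

```scala
import org.apache.spark.sql.functions.col

//all pivoted date columns, i.e. everything except the id column
val dateCols = resultDF.columns.filter(_ != "id")

//na.fill with a string value only touches string columns,
//so cast the aggregated price columns to string first
val asStrings = dateCols.foldLeft(resultDF) { (df, c) =>
  df.withColumn(c, col(c).cast("string"))
}

//replace the nulls with "-" to match the asked-for layout
asStrings.na.fill("-", dateCols).show()
```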

Upvotes: 3
