Varun Chadha

Reputation: 376

How to pivot dataset?

I use Spark 2.1.

I have some data in a Spark Dataframe, which looks like below:

| ID | type | val |
|----|------|-----|
| 1  | t1   | v1  |
| 1  | t11  | v11 |
| 2  | t2   | v2  |

I want to pivot this data using either Spark Scala (preferably) or Spark SQL, so that the final output looks like below:

| ID | t1 | t11 | t2 |
|----|----|-----|----|
| 1  | v1 | v11 |    |
| 2  |    |     | v2 |

Upvotes: 2

Views: 1928

Answers (3)

akuiper

Reputation: 215137

You can use `groupBy.pivot`:

import org.apache.spark.sql.functions.first
import spark.implicits._ // for the $"col" syntax (already in scope in spark-shell)

df.groupBy("ID").pivot("type").agg(first($"val")).na.fill("").show
+---+---+---+---+
| ID| t1|t11| t2|
+---+---+---+---+
|  1| v1|v11|   |
|  2|   |   | v2|
+---+---+---+---+

Note: depending on the actual data, i.e. how many values there are for each combination of ID and type, you might choose a different aggregation function.
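For example, a minimal sketch (assuming there may be several rows per ID/type pair, which the sample data does not have) that collects every value into a list; passing the pivot values explicitly also spares Spark the extra job that computes the distinct types:

import org.apache.spark.sql.functions.collect_list

// Listing the pivot values up front avoids a pass over the data to discover them
df.groupBy("ID")
  .pivot("type", Seq("t1", "t11", "t2"))
  .agg(collect_list($"val"))
  .show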

Upvotes: 2

Leo C

Reputation: 22449

Here's one way to do it:

import org.apache.spark.sql.functions.{collect_list, concat_ws}
import spark.implicits._ // for toDF (already in scope in spark-shell)

val df = Seq(
  (1, "T1", "v1"),
  (1, "T11", "v11"),
  (2, "T2", "v2")
).toDF(
  "id", "type", "val"
).as[(Int, String, String)]

val df2 = df.groupBy("id").pivot("type").agg(concat_ws(",", collect_list("val")))

df2.show
+---+---+---+---+
| id| T1|T11| T2|
+---+---+---+---+
|  1| v1|v11|   |
|  2|   |   | v2|
+---+---+---+---+

Note that if multiple vals are associated with a given id and type, they will be grouped (comma-delimited) under that type's column in df2.
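A quick sketch of that behavior, using a hypothetical extra row that duplicates the (1, T1) pair (the ordering inside the cell is not guaranteed):

val dfDup = Seq(
  (1, "T1", "v1"),
  (1, "T1", "v1b"), // hypothetical duplicate for (1, T1)
  (1, "T11", "v11"),
  (2, "T2", "v2")
).toDF("id", "type", "val")

// The T1 cell for id 1 now holds "v1,v1b"
dfDup.groupBy("id").pivot("type").agg(concat_ws(",", collect_list("val"))).show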

Upvotes: 2

m-bhole

Reputation: 1189

This one should work:

import org.apache.spark.sql.functions.first
import spark.implicits._ // for toDF (already in scope in spark-shell)

val seq = Seq((1, "t1", "v1"), (1, "t11", "v11"), (2, "t2", "v2"))
val df = seq.toDF("id", "type", "val")
val pivotedDF = df.groupBy("id").pivot("type").agg(first("val"))
pivotedDF.show

Output:

+---+----+----+----+
| id|  t1| t11|  t2|
+---+----+----+----+
|  1|  v1| v11|null|
|  2|null|null|  v2|
+---+----+----+----+
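The question also mentions Spark SQL as an option. Spark 2.1 has no PIVOT clause (that SQL syntax only arrived in Spark 2.4), but the same result can be sketched with conditional aggregation, assuming the DataFrame is registered under the hypothetical view name pivot_input:

df.createOrReplaceTempView("pivot_input")

// max ignores NULLs, so each id keeps its single non-null val per type
spark.sql("""
  SELECT id,
         max(CASE WHEN type = 't1'  THEN val END) AS t1,
         max(CASE WHEN type = 't11' THEN val END) AS t11,
         max(CASE WHEN type = 't2'  THEN val END) AS t2
  FROM pivot_input
  GROUP BY id
""").show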

Upvotes: 1
