sam8686

Reputation: 37

Spark scala create multiple columns from array column

How can I create multiple columns from an array column?

Dataframe

Car name | details
Toyota   | [[year,2000],[price,20000]]
Audi     | [[mpg,22]]

Expected dataframe

Car name | year | price | mpg
Toyota   | 2000 | 20000 | null
Audi     | null | null  | 22

Upvotes: 0

Views: 2162

Answers (2)

Sarath Chandra Vema

Reputation: 812

You can try this.

Let's define the data:

scala> import org.apache.spark.sql.functions._
import org.apache.spark.sql.functions._

scala> val carsDF = Seq(("toyota",Array(("year", 2000), ("price", 100000))), ("Audi", Array(("mpg", 22)))).toDF("car", "details")
carsDF: org.apache.spark.sql.DataFrame = [car: string, details: array<struct<_1:string,_2:int>>]

scala> carsDF.show(false)
+------+-----------------------------+
|car   |details                      |
+------+-----------------------------+
|toyota|[[year,2000], [price,100000]]|
|Audi  |[[mpg,22]]                   |
+------+-----------------------------+

Split the data and access the values inside each struct:

scala> val weDF = carsDF.withColumn("split", explode($"details")).withColumn("col", $"split"("_1")).withColumn("val", $"split"("_2")).select("car", "col", "val")
weDF: org.apache.spark.sql.DataFrame = [car: string, col: string ... 1 more field]

scala> weDF.show
+------+-----+------+
|   car|  col|   val|
+------+-----+------+
|toyota| year|  2000|
|toyota|price|100000|
|  Audi|  mpg|    22|
+------+-----+------+

Define the list of columns that are required:

scala> val colNames = Seq("mpg", "price", "year", "dummy")
colNames: Seq[String] = List(mpg, price, year, dummy)

Pivoting on the column names defined above gives the required output. Since the column names are supplied as a sequence, that sequence becomes a single point of input for adding new columns:

scala> weDF.groupBy("car").pivot("col", colNames).agg(avg($"val")).show
+------+----+--------+------+-----+
|   car| mpg|   price|  year|dummy|
+------+----+--------+------+-----+
|toyota|null|100000.0|2000.0| null|
|  Audi|22.0|    null|  null| null|
+------+----+--------+------+-----+
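
One caveat: agg(avg($"val")) casts everything to double, which is why year shows up as 2000.0. Assuming each key occurs at most once per car (true for this sample data, though the question doesn't guarantee it), you could aggregate with first (also from functions._, imported above) to keep the original integer type:

scala> weDF.groupBy("car").pivot("col", colNames).agg(first($"val")).show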

This seems a more elegant and easier way to achieve the output.

Upvotes: 1

firsni

Reputation: 906

You can do it like this:

import org.apache.spark.sql.DataFrame
import org.apache.spark.sql.functions.{col, lit, when}
import spark.implicits._ // for toDF
val df: DataFrame = Seq(
  ("toyota",Array(("year", 2000), ("price", 100000))),
  ("toyota",Array(("year", 2001)))
).toDF("car", "details")

 +------+-------------------------------+
 |car   |details                        |
 +------+-------------------------------+
 |toyota|[[year, 2000], [price, 100000]]|
 |toyota|[[year, 2001]]                 |
 +------+-------------------------------+

val newdf = df
  .withColumn("year", when(col("details")(0)("_1") === lit("year"), col("details")(0)("_2")).otherwise(col("details")(1)("_2")))
  .withColumn("price", when(col("details")(0)("_1") === lit("price"), col("details")(0)("_2")).otherwise(col("details")(1)("_2")))
  .drop("details")

newdf.show()
  +------+----+------+
  |   car|year| price|
  +------+----+------+
  |toyota|2000|100000|
  |toyota|2001|  null|
  +------+----+------+
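
This works for the sample data, but it hard-codes the positional indices, so it only handles arrays with at most two entries. A more general sketch, assuming Spark 2.4+ and that keys are unique within each row (neither is stated in the question), is to convert the array of structs into a map with map_from_entries and extract each key:

import org.apache.spark.sql.functions.map_from_entries

// Columns to extract; keys absent from a row simply come out as null
val keys = Seq("year", "price", "mpg")

val general = keys
  .foldLeft(df.withColumn("m", map_from_entries(col("details")))) {
    (acc, k) => acc.withColumn(k, col("m").getItem(k))
  }
  .drop("details", "m")

general.show()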

Upvotes: 1
