irrelevantUser

Reputation: 1322

Spark Dataset Join and Aggregate columns

I have three Spark Datasets of the same type A:

case class A(col_a: String, col_b: Int, col_c: Int, col_d: Int, col_e: Int)

val ds_one = Seq(A("a", 12, 0, 0, 0), A("b", 11, 0, 0, 0)).toDS
val ds_two = Seq(A("a", 0, 16, 0, 0), A("b", 0, 73, 0, 0)).toDS
val ds_three = Seq(A("a", 0, 0, 9, 0), A("b", 0, 0, 64, 0)).toDS

How do I reduce the three Datasets into a single Dataset[A]:

val ds_combined = Seq(A("a", 12, 16, 9, 0), A("b", 11, 73, 64, 0)).toDS

Upvotes: 1

Views: 333

Answers (1)

uh_big_mike_boi

Reputation: 3470

It looks like you are just grouping by col_a and taking the max of each of the other columns:

import org.apache.spark.sql.functions._
import spark.implicits._ // needed for .toDS and .as[A]; assumes a SparkSession named spark

case class A(col_a: String, col_b: Int, col_c: Int, col_d: Int, col_e: Int)

val ds_one = Seq(A("a", 12, 0, 0, 0), A("b", 11, 0, 0, 0)).toDS
val ds_two = Seq(A("a", 0, 16, 0, 0), A("b", 0, 73, 0, 0)).toDS
val ds_three = Seq(A("a", 0, 0, 9, 0), A("b", 0, 0, 64, 0)).toDS

val ds_union = ds_one.union(ds_two).union(ds_three)
val ds_combined = ds_union
  .groupBy("col_a")
  .agg(
    max("col_b").alias("col_b"),
    max("col_c").alias("col_c"),
    max("col_d").alias("col_d"),
    max("col_e").alias("col_e"))
  .as[A]

ds_combined.show

ds_combined: org.apache.spark.sql.Dataset[A]

+-----+-----+-----+-----+-----+
|col_a|col_b|col_c|col_d|col_e|
+-----+-----+-----+-----+-----+
|    b|   11|   73|   64|    0|
|    a|   12|   16|    9|    0|
+-----+-----+-----+-----+-----+
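
A note on why max works here: each input Dataset populates only one of the numeric columns and leaves zeros in the rest, so the max per group recovers the single populated value (sum would behave the same as long as the values are non-negative). If you prefer to stay in the typed Dataset API end to end, here is a minimal sketch of the same reduction using groupByKey and reduceGroups, assuming the same case class A and a SparkSession named spark are in scope:

import spark.implicits._

val ds_typed = ds_one.union(ds_two).union(ds_three)
  .groupByKey(_.col_a)          // typed grouping on the key column
  .reduceGroups((x, y) => A(    // element-wise max of two rows with the same key
    x.col_a,
    math.max(x.col_b, y.col_b),
    math.max(x.col_c, y.col_c),
    math.max(x.col_d, y.col_d),
    math.max(x.col_e, y.col_e)))
  .map(_._2)                    // drop the key, keep the reduced A

ds_typed.show

This avoids the .as[A] round trip through untyped columns, at the cost of spelling out the field-wise reduction by hand.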

Upvotes: 1
