haaduken

Reputation: 664

Spark 2 Coalesce Multiple Columns at once

I'm trying to upsert one dataframe into another.

scala> addressOrigRenamed.show
+--------------+----------------------+-----------+-----------+
|orig_contactid|orig_contactaddresskey|orig_valueA|orig_valueB|
+--------------+----------------------+-----------+-----------+
|             1|                     1|         54|          3|
|             1|                     2|         55|          7|
+--------------+----------------------+-----------+-----------+
scala> dfNew.show
+---------+-----------------+------+------+
|contactId|contactaddresskey|valueA|valueB|
+---------+-----------------+------+------+
|        1|                2|    10|     9|
+---------+-----------------+------+------+
scala> val endDF = addressOrigRenamed.join(dfNew,
         $"orig_contactid" === $"contactid" && $"orig_contactaddresskey" === $"contactaddresskey",
         "fullouter")
       .select(
         coalesce($"contactid", $"orig_contactid").alias("contactid"),
         coalesce($"contactaddresskey", $"orig_contactaddresskey").alias("contactaddresskey"),
         coalesce($"valueA", $"orig_valueA").alias("valueA"),
         coalesce($"valueB", $"orig_valueB").alias("valueB"))
scala> endDF.show
+---------+-----------------+------+------+
|contactid|contactaddresskey|valueA|valueB|
+---------+-----------------+------+------+
|        1|                1|    54|     3|
|        1|                2|    10|     9|
+---------+-----------------+------+------+

As you can see, this works, but the syntax is horrible. This is only a test; in practice I'll need to coalesce 15-20 columns, and writing `coalesce(...).alias(...)` 15-20 times is a terrible option. How can I write this more concisely?

Upvotes: 1

Views: 2621

Answers (1)

haaduken

Reputation: 664

You can build an array of `coalesce` expressions programmatically from the column names:

scala> val joinedDF = addressOrigRenamed.join(dfNew, $"orig_contactid" === $"contactid" && $"orig_contactaddresskey" === $"contactaddresskey", "fullouter")
scala> val arr = dfNew.columns.map(x => {
         val y = "orig_" + x
         coalesce(joinedDF.col(x), joinedDF.col(y)).alias(x)
      })

Then select using that array, remembering to expand its elements with `:_*`:

scala> joinedDF.select(arr:_*).show 
+---------+-----------------+------+------+
|contactId|contactaddresskey|valueA|valueB|
+---------+-----------------+------+------+
|        1|                1|    54|     3|
|        1|                2|    10|     9|
+---------+-----------------+------+------+
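The same idea can be packaged as a reusable helper. This is a sketch, not from the original answer: the `upsert` function name and the `prefix` parameter are assumptions, and it relies on the post's convention that every column in the original DataFrame carries an `orig_` prefix while the updates DataFrame uses the bare names.

```scala
import org.apache.spark.sql.DataFrame
import org.apache.spark.sql.functions.coalesce

// Hypothetical helper generalizing the pattern above: full-outer join on the
// key columns, then coalesce(new, original) for every column of `updates`.
def upsert(orig: DataFrame, updates: DataFrame,
           keys: Seq[String], prefix: String = "orig_"): DataFrame = {
  // Join condition: updates.key === orig.<prefix>key for each key, ANDed together
  val joinCond = keys.map(k => updates.col(k) === orig.col(prefix + k)).reduce(_ && _)
  val joined = orig.join(updates, joinCond, "fullouter")
  // Prefer the updated value, falling back to the original one
  val merged = updates.columns.map(c =>
    coalesce(updates.col(c), orig.col(prefix + c)).alias(c))
  joined.select(merged: _*)
}
```

With this, the 15-20 column case reduces to a single call, e.g. `upsert(addressOrigRenamed, dfNew, Seq("contactid", "contactaddresskey"))`, and only the key columns need to be listed explicitly.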

Upvotes: 1
