Hanan Atallah

Reputation: 120

Convert dataframe to hash-map using Spark Scala

My DataFrame looks like this:

+-------------------+-------------+
|        Nationality|    continent|
+-------------------+-------------+
|       Turkmenistan|         Asia|
|         Azerbaijan|         Asia|
|             Canada|North America|
|         Luxembourg|       Europe|
|             Gambia|       Africa|
+-------------------+-------------+

My output should look like this:

Map(Gibraltar -> Europe, Haiti -> North America)

So I'm trying to convert the DataFrame into

scala.collection.mutable.Map[String, String]()

I'm trying with the following code:

    val encoder = Encoders.product[(String, String)]
    // Note: this mutable map is created on the driver; the closure below
    // captures a copy per executor, so it is not shared state.
    val countryToContinent = scala.collection.mutable.Map[String, String]()
    val mapped = nationalityDF.mapPartitions((it) => {
        ....
        ....
        countryToContinent.toIterator
    })(encoder).toDF("Nationality", "continent").as[(String, String)](encoder)

    val map = mapped.rdd.groupByKey.collect.toMap

But the resulting map has the following output:

Map(Gibraltar -> CompactBuffer(Europe), Haiti -> CompactBuffer(North America))

How can I get the hash-map result without the CompactBuffer wrapper?
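For context, mapped above is already a Dataset[(String, String)], so a minimal sketch that skips the grouping step entirely (reusing the mapped value from the snippet above) is to collect the pair RDD directly:

    // collectAsMap keeps a single value per key, so no CompactBuffer appears
    val map = mapped.rdd.collectAsMap()

collectAsMap returns a scala.collection.Map[String, String], which works here because each country maps to exactly one continent.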

Upvotes: 2

Views: 932

Answers (1)

abiratsis

Reputation: 7336

Let's create some data:

import spark.implicits._ // from the SparkSession; needed for toDF and the Encoder used by map

val df = Seq(
  ("Turkmenistan", "Asia"),
  ("Azerbaijan", "Asia"))
  .toDF("Country", "Continent")

Map each row into a tuple first, then collect into a map:

df.map { r => (r.getString(0), r.getString(1)) }.collect.toMap

Output:

scala.collection.immutable.Map[String,String] = Map(Turkmenistan -> Asia, Azerbaijan -> Asia)
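If you specifically need a scala.collection.mutable.Map[String, String] as asked in the question, a small sketch building on the same collected pairs (same df as above) is:

    import scala.collection.mutable

    // splat the collected (country, continent) pairs into a mutable map
    val countryToContinent: mutable.Map[String, String] =
      mutable.Map(df.map(r => (r.getString(0), r.getString(1))).collect: _*)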

Upvotes: 2
