Sekhar

Reputation: 165

How to add a Map column to Spark dataset?

I have a Java Map variable, say Map<String, String> singleColMap. I want to add this Map variable to a Dataset as a new column in Spark 2.2 (Java 1.8).

I tried the code below, but it does not work:

ds.withColumn("cMap", lit(singleColMap).cast(MapType(StringType, StringType)))

Can someone help with this?

Upvotes: 1

Views: 2513

Answers (2)

Helder Pereira

Reputation: 5756

This can easily be solved in Scala with typedLit, but I couldn't find a way to make that method work in Java, because it requires a TypeTag, which I don't think is even possible to create in Java.

However, I managed to mostly emulate what typedLit does in Java, bar the type inference part, so I need to set the Spark type explicitly:

// Needs org.apache.spark.sql.Column, org.apache.spark.sql.catalyst.expressions.Literal,
// scala.collection.JavaConverters, and static imports of org.apache.spark.sql.types.DataTypes.*
public static Column typedMap(Map<String, String> map) {
    // Convert the Java Map to a Scala Map and wrap it in an explicitly typed Catalyst literal
    return new Column(Literal.create(
            JavaConverters.mapAsScalaMapConverter(map).asScala(),
            createMapType(StringType, StringType)));
}

Then it can be used like this:

ds.withColumn("cMap", typedMap(singleColMap))

Upvotes: 0

Shaido

Reputation: 28392

You can use typedLit, which was introduced in Spark 2.2.0. From the documentation:

The difference between this function and lit is that this function can handle parameterized scala types e.g.: List, Seq and Map.

So in this case, the following should be enough:

ds.withColumn("cMap", typedLit(singleColMap))

Upvotes: 1
