Reputation: 65
I hit a snag earlier trying to do some transformations within Spark DataFrames.
Let's say I have a DataFrame with the schema:
root
|-- coordinates: array (nullable = true)
| |-- element: double (containsNull = true)
|-- userid: string (nullable = true)
|-- pubuid: string (nullable = true)
I would like to get rid of the array(double) in coordinates and instead get a DF whose rows look like
"coordinates(0),coordinates(1)", userid, pubuid
or something like
coordinates(0), coordinates(1), userid, pubuid.
In plain Scala I could do
coordinates.mkString(",")
but in DataFrames coordinates resolves to a java.util.List.
So far I've worked around the issue by reading into an RDD, transforming it, and then building a new DF. But I was wondering if there is a more elegant way to do that with DataFrames.
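For reference, the RDD round trip I've been using looks roughly like this (a sketch only; it assumes the schema above, a DataFrame named df, and a SparkSession named spark with its implicits imported):
import spark.implicits._

val flattened = df.rdd.map { row =>
  // pull the two coordinates out of the array column, keep the rest as-is
  val coords = row.getSeq[Double](0)
  (coords(0), coords(1), row.getString(1), row.getString(2))
}
val newDf = flattened.toDF("x", "y", "userid", "pubuid")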
Thanks for your help.
Upvotes: 0
Views: 3138
Reputation: 330353
You can use a UDF:
import org.apache.spark.sql.functions.{udf, lit}
val mkString = udf((a: Seq[Double]) => a.mkString(", "))
df.withColumn("coordinates_string", mkString($"coordinates"))
or
val apply = udf((a: Seq[Double], i: Int) => a(i))
df.select(
$"*",
apply($"coordinates", lit(0)).alias("x"),
apply($"coordinates", lit(1)).alias("y")
)
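For a quick sanity check, here is what both UDFs produce on a toy DataFrame (the values and column contents here are made up purely for illustration):
import spark.implicits._

val df = Seq((Seq(52.52, 13.4), "u1", "p1"))
  .toDF("coordinates", "userid", "pubuid")

// adds a column containing "52.52, 13.4"
df.withColumn("coordinates_string", mkString($"coordinates"))

// adds x = 52.52 and y = 13.4 as separate double columns
df.select(
  $"*",
  apply($"coordinates", lit(0)).alias("x"),
  apply($"coordinates", lit(1)).alias("y")
)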
Edit:
In recent versions you can also use concat_ws:
import org.apache.spark.sql.functions.concat_ws
df.withColumn(
"coordinates_string", concat_ws(",", $"coordinates")
)
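One caveat: depending on the Spark version, concat_ws may only accept strings or arrays of strings, in which case casting the array first should work:
df.withColumn(
  "coordinates_string",
  // cast array<double> to array<string> before joining
  concat_ws(",", $"coordinates".cast("array<string>"))
)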
or simply Column.apply:
df.select($"*", $"coordinates"(0).alias("x"), $"coordinates"(1).alias("y"))
Upvotes: 3