blue-sky

Reputation: 53786

How to flatten a collection with Spark/Scala?

In Scala I can flatten a collection using:

val array = Array(List("1,2,3").iterator, List("1,4,5").iterator)
                //> array: Array[Iterator[String]] = Array(non-empty iterator, non-empty iterator)

array.toList.flatten                              //> res0: List[String] = List(1,2,3, 1,4,5)

But how can I do something similar in Spark?

Reading the API doc (http://spark.apache.org/docs/0.7.3/api/core/index.html#spark.RDD), there does not seem to be a method that provides this functionality.

Upvotes: 21

Views: 38922

Answers (2)

samthebest

Reputation: 31515

Use flatMap with identity from Predef; this is more readable than x => x, e.g.

myRdd.flatMap(identity)
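
For context, here is a minimal sketch of what that looks like end to end (the RDD name and values are illustrative, not from the original post). Either form works because RDD.flatMap accepts any function returning a TraversableOnce:

    // build an RDD of lists, then flatten it with identity
    val myRdd = sc.parallelize(Seq(List(1, 2, 3), List(4, 5)))  // RDD[List[Int]]
    myRdd.flatMap(identity).collect()                           // Array(1, 2, 3, 4, 5)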

Upvotes: 37

Josh Rosen

Reputation: 13801

Try flatMap with an identity map function (y => y):

scala> val x = sc.parallelize(List(List("a"), List("b"), List("c", "d")))
x: org.apache.spark.rdd.RDD[List[String]] = ParallelCollectionRDD[1] at parallelize at <console>:12

scala> x.collect()
res0: Array[List[String]] = Array(List(a), List(b), List(c, d))

scala> x.flatMap(y => y)
res3: org.apache.spark.rdd.RDD[String] = FlatMappedRDD[3] at flatMap at <console>:15

scala> x.flatMap(y => y).collect()
res4: Array[String] = Array(a, b, c, d)

Upvotes: 31
