Reputation: 2659
This works (df: DataFrame):
val filteredRdd = df.rdd.zipWithIndex.collect { case (r, i) if i >= 10 => r }
This doesn't:
val start = 10
val filteredRdd = df.rdd.zipWithIndex.collect { case (r, i) if i >= start => r }
I tried using broadcast variables, but even that didn't work:
val start = sc.broadcast(1)
val filteredRdd = df.rdd.zipWithIndex.collect { case (r, i) if i >= start.value => r }
I am getting a Task not serializable exception. Can anyone explain why it fails even with broadcast variables?
org.apache.spark.SparkException: Task not serializable
at org.apache.spark.util.ClosureCleaner$.ensureSerializable(ClosureCleaner.scala:304)
at org.apache.spark.util.ClosureCleaner$.org$apache$spark$util$ClosureCleaner$$clean(ClosureCleaner.scala:294)
at org.apache.spark.util.ClosureCleaner$.clean(ClosureCleaner.scala:122)
at org.apache.spark.SparkContext.clean(SparkContext.scala:2055)
at org.apache.spark.rdd.RDD$$anonfun$collect$2.apply(RDD.scala:959)
at org.apache.spark.rdd.RDD$$anonfun$collect$2.apply(RDD.scala:958)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:111)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:316)
at org.apache.spark.rdd.RDD.collect(RDD.scala:958)
at $iwC$$iwC$$iwC$$iwC... (remaining frames are spark-shell REPL wrapper classes, truncated)
Upvotes: 2
Views: 307
Reputation: 63022
The basic constructs you are using look solid. Here is a similar code snippet that does work: it creates a broadcast variable and uses its value inside the map method, similarly to your code.
scala> val dat = sc.parallelize(List(1,2,3))
dat: org.apache.spark.rdd.RDD[Int] = ParallelCollectionRDD[0] at parallelize at <console>:24
scala> val br = sc.broadcast(10)
br: org.apache.spark.broadcast.Broadcast[Int] = Broadcast(2)
scala> dat.map(br.value * _)
res2: org.apache.spark.rdd.RDD[Int] = MapPartitionsRDD[1] at map at <console>:29
scala> res2.collect
res3: Array[Int] = Array(10, 20, 30)
So this may help you as a verification of your general approach.
I suspect the problem lies with other variables in your script. Try stripping everything out in a fresh spark-shell session and find the culprit by process of elimination, as in the sketch below.
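For reference, here is a minimal sketch (the Job class and method names are hypothetical, not from your code) of the usual culprit: when start is a field of a non-serializable enclosing object, the guard i >= start is really i >= this.start, so Spark has to serialize the whole enclosing instance. Copying the field into a local val first means the closure captures only the Int.

class Job(sc: org.apache.spark.SparkContext) { // not Serializable

  val start = 10

  def broken(df: org.apache.spark.sql.DataFrame) =
    // `i >= start` captures `this`, including the non-serializable SparkContext
    df.rdd.zipWithIndex.collect { case (r, i) if i >= start => r }

  def fixed(df: org.apache.spark.sql.DataFrame) = {
    val s = start // local copy; the closure now captures only an Int
    df.rdd.zipWithIndex.collect { case (r, i) if i >= s => r }
  }
}

The same thing can happen implicitly in spark-shell: top-level vals live inside REPL wrapper objects (the $iwC entries in your trace), and a closure that references one of them can drag non-serializable state along with it.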
Upvotes: 1