user2200660

Reputation: 1271

Remove duplicate keys from Spark Scala

I am using Spark 1.2 with Scala and have a pair RDD of (String, String). Sample records look like:

<Key,  value>
id_1,  val_1_1; val_1_2
id_2,  val_2_1; val_2_2
id_3,  val_3_1; val_3_2
id_1,  val_4_1; val_4_2

I just want to remove all records with a duplicate key, so in the above example the fourth record will be removed because id_1 is a duplicate key.

Please help.

Thanks.

Upvotes: 1

Views: 8638

Answers (2)

mattinbits

Reputation: 10428

If it is necessary to always select the first entry for a given key, then, combining @JeanLogeart's answer with the comment from @Paul:

import org.apache.spark.{SparkContext, SparkConf}

val data = List(
  ("id_1", "val_1_1; val_1_2"),
  ("id_2", "val_2_1; val_2_2"),
  ("id_3", "val_3_1; val_3_2"),
  ("id_1", "val_4_1; val_4_2"))

val conf = new SparkConf().setMaster("local").setAppName("App")
val sc = new SparkContext(conf)
val dataRDD = sc.parallelize(data)

// Tag each record with its position, then for each key keep the record
// with the lowest index, i.e. the first occurrence in the data.
val resultRDD = dataRDD.zipWithIndex.map {
  case ((key, value), index) => (key, (value, index))
}.reduceByKey((v1, v2) => if (v1._2 < v2._2) v1 else v2).mapValues(_._1)

resultRDD.collect().foreach(v => println(v))
sc.stop()
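
With the sample data above, this should print the three distinct keys with the first value kept for id_1, along these lines (the order of the collected output is not guaranteed):

(id_1,val_1_1; val_1_2)
(id_2,val_2_1; val_2_2)
(id_3,val_3_1; val_3_2)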

Upvotes: 1

Jean Logeart

Reputation: 53819

You can use reduceByKey:

import org.apache.spark.rdd.RDD

val rdd: RDD[(K, V)] = // ...
// for each key, keep one of the values and drop the others
val res: RDD[(K, V)] = rdd.reduceByKey((v1, v2) => v1)
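
For the question's sample data, a minimal self-contained version of this might look like the following (the app name and the dedupRDD variable are just illustrative; note that with plain reduceByKey the value that survives for a duplicated key is not guaranteed to be the first one appearing in the data):

import org.apache.spark.{SparkConf, SparkContext}

val conf = new SparkConf().setMaster("local").setAppName("DedupApp")
val sc = new SparkContext(conf)

// Pair RDD built from the sample records in the question
val pairs = sc.parallelize(Seq(
  ("id_1", "val_1_1; val_1_2"),
  ("id_2", "val_2_1; val_2_2"),
  ("id_3", "val_3_1; val_3_2"),
  ("id_1", "val_4_1; val_4_2")))

// Keep a single value per key; which duplicate survives depends on
// partitioning and reduce order, so use the zipWithIndex approach above
// if the first occurrence must be kept deterministically.
val dedupRDD = pairs.reduceByKey((v1, v2) => v1)

dedupRDD.collect().foreach(println)
sc.stop()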

Upvotes: 11
