merours

Reputation: 4106

How to compute the mean with Apache spark?

I have a list of Doubles stored like this:

JavaRDD<Double> myDoubles
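
For context, such an RDD might be built along these lines (a minimal sketch; the local master, app name, and sample values are illustrative assumptions, not part of my actual setup):

import java.util.Arrays;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;

// Hypothetical setup: parallelize a local list of Doubles into an RDD.
JavaSparkContext sc = new JavaSparkContext("local", "mean-example");
JavaRDD<Double> myDoubles = sc.parallelize(Arrays.asList(1.0, 2.0, 3.0, 4.0));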

I would like to compute the mean of this list. According to the documentation:

All of MLlib’s methods use Java-friendly types, so you can import and call them there the same way you do in Scala. The only caveat is that the methods take Scala RDD objects, while the Spark Java API uses a separate JavaRDD class. You can convert a Java RDD to a Scala one by calling .rdd() on your JavaRDD object.

On the same page, I see the following code :

val MSE = valuesAndPreds.map{case(v, p) => math.pow((v - p), 2)}.mean()

From my understanding, this is equivalent (in terms of types) to

Double MSE = RDD<Double>.mean()

As a consequence, I tried to compute the mean of my JavaRDD like this :

myDoubles.rdd().mean()

However, it doesn't work and gives me the following error: The method mean() is undefined for the type RDD<Double>. I also found no mention of this function in the Scala RDD documentation. Is this due to a misunderstanding on my side, or is it something else?

Upvotes: 6

Views: 12602

Answers (2)

merours

Reputation: 4106

It's actually quite simple: mean() is defined on the JavaDoubleRDD class. I couldn't find a way to convert from JavaRDD<Double> to JavaDoubleRDD, but in my case it was not necessary.

Indeed, this line in Scala

val mean = valuesAndPreds.map{case(v, p) => (v - p)}.mean()

can be expressed in Java as

double mean = valuesAndPreds.mapToDouble(tuple -> tuple._1() - tuple._2()).mean();
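
As an aside, a direct conversion from JavaRDD<Double> to JavaDoubleRDD is possible with mapToDouble and an identity mapping (a minimal sketch, reusing the myDoubles RDD from the question):

import org.apache.spark.api.java.JavaDoubleRDD;

// Map each boxed Double to itself; the result is a JavaDoubleRDD,
// which defines mean() along with sum(), stdev(), and other numeric helpers.
JavaDoubleRDD doubleRdd = myDoubles.mapToDouble(d -> d);
double meanOfDoubles = doubleRdd.mean();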

Upvotes: 10

Viliam Simko

Reputation: 1841

Don't forget to add import org.apache.spark.SparkContext._ at the top of your Scala file; it pulls in the implicit conversions that make mean() available on RDD[Double]. Also make sure you are actually calling mean() on an RDD[Double] and not on an RDD of some other type.

Upvotes: 0
