Turgeon28

Reputation: 53

How to predict kmeans cluster with Spark org.apache.spark.ml.clustering.{KMeans, KMeansModel}

I have a problem with the two different MLlib implementations of KMeans (org.apache.spark.ml and org.apache.spark.mllib). I am using the new org.apache.spark.ml implementation, which works on DataFrames, but I'm struggling with the documentation and with how to predict a cluster index.

import org.apache.spark.ml.clustering.{KMeans, KMeansModel}
import org.apache.spark.ml.feature.LabeledPoint
import org.apache.spark.ml.linalg.Vectors
import org.apache.spark.sql.{Row, SparkSession}

/**
  * An example showcasing the use of kMeans
  */
object ExploreKMeans {

  // Spark configuration.
  // Retrieve sparkContext with spark.sparkContext.
  private val spark = SparkSession.builder()
    .appName("com.example.ml.exploration.kMeans")
    .master("local[*]")
    .getOrCreate()

  // This import, after a valid SparkSession has been defined, provides implicits for converting RDDs to DataFrames via .toDF().
  import spark.implicits._

  def main(args: Array[String]): Unit = {

    val data = spark.sparkContext.parallelize(Array((5.0, 2.0, 1.5), (2.0, 2.5, 2.3), (1.0, 2.1, 4.2), (2.0, 5.5, 8.5)))

    val df = data.toDF().map { row =>
      val label = row(0).asInstanceOf[Double]
      val value1 = row(1).asInstanceOf[Double]
      val value2 = row(2).asInstanceOf[Double]

      LabeledPoint(label, Vectors.dense(value1, value2))
    }


    val kmeans = new KMeans().setK(3).setSeed(1L)
    val model: KMeansModel = kmeans.fit(df)

    // Evaluate clustering by computing Within Set Sum of Squared Errors.
    val WSSSE = model.computeCost(df)
    println(s"Within Set Sum of Squared Errors = $WSSSE")

    // Shows the result.
    println("Cluster Centers: ")
    model.clusterCenters.foreach(println)

    //TODO How to predict cluster index?
    //model.predict(???
  }
}

How do I use the model to predict the cluster index of new values? The model.predict function is not visible. This API is really confusing...

Upvotes: 1

Views: 3056

Answers (2)

Vincent_lrc

Reputation: 21

Well, an easier way to do this is:

model.summary.predictions.show

Upvotes: 2

Turgeon28

Reputation: 53

OK, I got it. Predictions are now done with the transform method:

println("Transform: ")
val transformed = model.transform(df)
transformed.collect().foreach(println)

Cluster Centers: 
[2.25,1.9]
[5.5,8.5]
[2.1,4.2]
Transform: 
[5.0,[2.0,1.5],0]
[2.0,[2.5,2.3],0]
[1.0,[2.1,4.2],2]
[2.0,[5.5,8.5],1]
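The same transform call also works for points the model has never seen: to cluster new values, build a DataFrame with a "features" vector column and transform it; the model appends a "prediction" column with the cluster index. A minimal sketch, assuming the fitted `model` and the SparkSession (with `spark.implicits._` imported) from the question:

```scala
import org.apache.spark.ml.linalg.Vectors

// New, unseen points as a single-column DataFrame named "features"
// (the default input column expected by KMeansModel.transform).
val newData = Seq(
  Tuple1(Vectors.dense(2.2, 2.0)),
  Tuple1(Vectors.dense(5.0, 8.0))
).toDF("features")

// transform adds a "prediction" column holding the cluster index.
val predicted = model.transform(newData)
predicted.select("features", "prediction").show()
```

This is the DataFrame-API replacement for the old org.apache.spark.mllib `model.predict(vector)` call: prediction is expressed as a column transformation rather than a per-vector method.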

Upvotes: 1
