Phong Pham

Reputation: 63

Customize Distance Formula of K-means in Apache Spark Python

I'm currently using K-means for clustering, following this tutorial and API.

But I want to use a custom formula to calculate distances. How can I pass a custom distance function to k-means in PySpark?

Upvotes: 4

Views: 5897

Answers (1)

zero323

Reputation: 330193

In general, using a different distance measure doesn't make sense, because the k-means algorithm (unlike k-medoids) is well defined only for Euclidean distances.

See Why does k-means clustering algorithm use only Euclidean distance metric? for an explanation.
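The core of the argument can be shown numerically. k-means alternates assignment with an update step that sets each center to the *mean* of its points, and the mean is the minimizer of the sum of *squared Euclidean* distances specifically; for another metric (say Manhattan/L1) a different statistic (the median) minimizes the cost, so swapping only the distance function breaks the algorithm's convergence guarantee. A quick sketch with a made-up 1-D sample (the numbers are illustrative only):

```python
import numpy as np

# Hypothetical 1-D sample, chosen only to illustrate the point.
x = np.array([1.0, 2.0, 3.0, 4.0, 10.0])

def sse(c):
    # squared Euclidean cost of a single center c
    return np.sum((x - c) ** 2)

def sad(c):
    # Manhattan (L1) cost of a single center c
    return np.sum(np.abs(x - c))

# Brute-force search over candidate centers on a fine grid.
candidates = np.linspace(0.0, 11.0, 1101)
best_sse = candidates[np.argmin([sse(c) for c in candidates])]
best_sad = candidates[np.argmin([sad(c) for c in candidates])]

print(best_sse, x.mean())     # squared cost is minimized at the mean (4.0)
print(best_sad, np.median(x)) # L1 cost is minimized at the median (3.0)
```

So a "k-means" iteration that measures distance with L1 but still updates centers with the mean is optimizing two different objectives at once and need not converge to anything meaningful.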

Moreover, MLlib algorithms are implemented in Scala, and PySpark provides only the wrappers required to execute Scala code. Therefore providing a custom metric as a Python function wouldn't be technically possible without significant changes to the API.

Please note that since Spark 2.4 there are two built-in measures that can be used with pyspark.ml.clustering.KMeans and pyspark.ml.clustering.BisectingKMeans (see the distanceMeasure Param):

  • euclidean for Euclidean distance.
  • cosine for cosine distance.

Use at your own risk.

Upvotes: 6
