plalanne

Reputation: 1030

Create column with distance to center

I am running a KMeans algorithm with pyspark. The input is a Vector of length 20 (the output of word2vec on text verbatims). I then transform my input dataframe to get the predicted center associated with each verbatim.

from pyspark.ml.clustering import KMeans

n_centres = 14
kmeans = KMeans().setK(n_centres).setSeed(1)
model = kmeans.fit(df)
df_pred = model.transform(df)

I get the following result:

df_pred.show()

+--------------------+----------+
|            features|prediction|
+--------------------+----------+
|[-0.1879145856946...|        13|
|[-0.4428333640098...|         6|
|[0.00466226078569...|         9|
|[0.09467326601346...|        12|
|[-0.0388545106080...|         5|
|[-0.1805213503539...|        13|
|[0.08455141757925...|         3|
+--------------------+----------+

I would like to add a column to my dataframe containing the distance between the features vector and the center to which it is assigned. I know how to get the coordinates of a center, and I know how to compute the distance between a vector and a center:

model.clusterCenters()[3] # to get the coordinates of cluster number 3
v1.squared_distance(center_vect) # squared Euclidean distance between v1 and the center center_vect

But I can't figure out how to add the result of this computation as a column. A udf or a map seems to be the solution, but I keep getting errors like: PicklingError: Could not serialize object....

Upvotes: 1

Views: 1106

Answers (1)

bendl

Reputation: 1630

You're correct to assume you need to use a UDF. Here's an example of how this will work in a similar context:

>>> import random
>>> from pyspark.sql.functions import udf
>>> centers = {1: 2, 2: 3, 3: 4, 4: 5, 5: 6}
>>> choices = [1, 2, 3, 4, 5]
>>> l = [(random.random(), random.choice(choices)) for i in range(10)]
>>> df = spark.createDataFrame(l, ['features', 'prediction'])
>>> df.show()
+-------------------+----------+
|           features|prediction|
+-------------------+----------+
| 0.4836744206538728|         3|
|0.38698675915124414|         4|
|0.18612684714681604|         3|
| 0.5056159922655895|         1|
| 0.7825023909896331|         4|
|0.49933715239708243|         5|
| 0.6673811293962939|         4|
| 0.7010166164833609|         3|
| 0.6867109795526414|         5|
|0.21975859257732422|         3|
+-------------------+----------+
>>> dist = udf(lambda features, prediction: features - centers[prediction])
>>> df.withColumn('dist', dist(df.features, df.prediction)).show()
+-------------------+----------+-------------------+
|           features|prediction|               dist|
+-------------------+----------+-------------------+
| 0.4836744206538728|         3| -3.516325579346127|
|0.38698675915124414|         4| -4.613013240848756|
|0.18612684714681604|         3| -3.813873152853184|
| 0.5056159922655895|         1|-1.4943840077344106|
| 0.7825023909896331|         4| -4.217497609010367|
|0.49933715239708243|         5| -5.500662847602918|
| 0.6673811293962939|         4|-4.3326188706037065|
| 0.7010166164833609|         3| -3.298983383516639|
| 0.6867109795526414|         5| -5.313289020447359|
|0.21975859257732422|         3| -3.780241407422676|
+-------------------+----------+-------------------+

You can alter the line where I create the UDF to something like the following. Note that referencing `model` directly inside the lambda can itself raise the PicklingError you saw, because the fitted model wraps a JVM object and can't be serialized, so pull the centers out into a plain Python list first, and declare the return type so Spark doesn't fall back to strings:

from pyspark.sql.types import DoubleType

centers = model.clusterCenters()
dist = udf(lambda features, prediction: float(features.squared_distance(centers[prediction])), DoubleType())

Since I don't have the actual data to work with I'm hoping that's correct!
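To make the pattern concrete without a Spark cluster at hand, here is a minimal sketch of the same idea in plain Python: the distance function only closes over ordinary Python data (a list of centers), which is exactly what makes the Spark UDF picklable. The `model` and `df_pred` names in the comments refer to the objects from the question and are assumed, not tested here:

```python
def squared_distance(v, c):
    """Squared Euclidean distance between two equal-length sequences."""
    return sum((a - b) ** 2 for a, b in zip(v, c))

# In Spark, the equivalent would capture only a plain list of centers,
# never the fitted model itself (hypothetical, following the question's names):
#
#   centers = [list(c) for c in model.clusterCenters()]
#   from pyspark.sql.functions import udf
#   from pyspark.sql.types import DoubleType
#   dist = udf(lambda feats, pred: squared_distance(feats, centers[pred]), DoubleType())
#   df_pred = df_pred.withColumn('dist', dist('features', 'prediction'))

print(squared_distance([0.0, 3.0], [4.0, 0.0]))  # 25.0
```

Because `centers` is just a list of floats, the closure serializes cleanly to the executors, which avoids the PicklingError entirely.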

Upvotes: 2
