Usman Khan

Reputation: 147

How to subtract each row in a Spark dataframe from every other row in PySpark?

I have a Spark dataframe with 3 columns that give the positions of atoms, i.e. Position X, Y & Z. I now want to find the distance between every two atoms, for which I need to apply the distance formula: d = sqrt((x2 - x1)^2 + (y2 - y1)^2 + (z2 - z1)^2)
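For reference, here is the formula in plain Python (a minimal sketch; the two example points are the first two atoms of the dataframe shown below):

import math

def distance(p1, p2):
    # Euclidean distance between two (x, y, z) points
    (x1, y1, z1), (x2, y2, z2) = p1, p2
    return math.sqrt((x2 - x1) ** 2 + (y2 - y1) ** 2 + (z2 - z1) ** 2)

distance((27.545, 6.743, 12.111), (27.708, 7.543, 13.332))  # ~ 1.4688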

So to apply the above formula, I need to subtract every row in x from every other row in x, every row in y from every other row in y, and so on, and then apply the formula to every pair of atoms.
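To make the pattern concrete, this is what "every row against every other row" means in plain Python (just an illustration with itertools, not Spark code; the coordinates are the first two rows of the dataframe below):

import math
from itertools import combinations

atoms = [
    (27.545, 6.743, 12.111),
    (27.708, 7.543, 13.332),
    # ... the remaining atoms
]

# every unordered pair of rows exactly once: C(n, 2) pairs for n atoms
for (i, p1), (j, p2) in combinations(enumerate(atoms), 2):
    d = math.sqrt(sum((b - a) ** 2 for a, b in zip(p1, p2)))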

I have tried to write a user-defined function (udf), but I am unable to pass the whole Spark dataframe to it; I can only pass each column separately, not the whole dataframe. Because of that I couldn't iterate over the whole dataframe and instead had to apply for loops on each column. The piece of code below shows the iteration I am doing for Position_X only.

from pyspark.sql.functions import udf

@udf
def Distance(Position_X, Position_Y, Position_Z):
    try:
        # attempt: for each x-coordinate, subtract every other x-coordinate
        for x, z in enumerate(Position_X):
            firstAtom = z
            for y, a in enumerate(Position_X):
                if x != y:
                    diff = firstAtom - a
            return diff
    except:
        # the udf receives one row's scalar values, so enumerate() fails and we land here
        return None

newDF1 = atomsDF.withColumn("Distance", Distance(*atomsDF.columns))

My atomsDF Spark dataframe looks like this; each row shows the x, y, z coordinates of one atom in space. For now we are taking only 10 atoms.

+----------+----------+----------+
|Position_X|Position_Y|Position_Z|
+----------+----------+----------+
|    27.545|     6.743|    12.111|
|    27.708|     7.543|    13.332|
|    27.640|     9.039|    12.970|
|    26.991|     9.793|    13.693|
|    29.016|     7.166|    14.106|
|    29.286|     8.104|    15.273|
|    28.977|     5.725|    14.603|
|    28.267|     9.456|    11.844|
|    28.290|    10.849|    11.372|
|    26.869|    11.393|    11.161|
+----------+----------+----------+

How can I solve the above problem in PySpark, i.e. how to subtract each row from every other row? How can I pass a whole Spark dataframe to a udf instead of its columns? And how can I avoid using so many for loops?

The expected output for every two atoms (rows) is the distance between them, calculated with the above formula. I don't need to retain the distances, because I will be feeding them into another formula for potential energy, but keeping them in a separate dataframe would also be fine.

Upvotes: 0

Views: 2383

Answers (1)

Steven

Reputation: 15258

If you want to compare the atoms (the rows) 2 by 2, you need to perform a cross join ... which is not recommended.

You can use the function monotonically_increasing_id to generate an id for each row.

from pyspark.sql import functions as F
df = df.withColumn("id", F.monotonically_increasing_id())

Then you crossJoin the dataframe with itself and filter to keep only the rows where "id_1 > id_2":

# rename the columns so the two copies can be distinguished after the join
df_1 = df.select(*(F.col(col).alias("{}_1".format(col)) for col in df.columns))
df_2 = df.select(*(F.col(col).alias("{}_2".format(col)) for col in df.columns))
# self cross join, keeping each unordered pair of rows exactly once
df_3 = df_1.crossJoin(df_2).where("id_1 > id_2")

df_3 contains the 45 lines you need (C(10, 2) = 10 * 9 / 2 = 45 unordered pairs of atoms). You just have to apply your formula:

df_4 = df_3.withColumn(
    "distance",
    F.sqrt(
        F.pow(F.col("Position_X_1") - F.col("Position_X_2"), F.lit(2))
        + F.pow(F.col("Position_Y_1") - F.col("Position_Y_2"), F.lit(2))
        + F.pow(F.col("Position_Z_1") - F.col("Position_Z_2"), F.lit(2))
    )
)


df_4.orderBy('id_2', 'id_1').show()
+------------+------------+------------+----------+------------+------------+------------+----+------------------+
|Position_X_1|Position_Y_1|Position_Z_1|      id_1|Position_X_2|Position_Y_2|Position_Z_2|id_2|          distance|
+------------+------------+------------+----------+------------+------------+------------+----+------------------+
|      27.708|       7.543|      13.332|         1|      27.545|       6.743|      12.111|   0|1.4688124454810418|
|       27.64|       9.039|       12.97|         2|      27.545|       6.743|      12.111|   0| 2.453267616873462|
|      26.991|       9.793|      13.693|         3|      27.545|       6.743|      12.111|   0| 3.480249991020759|
|      29.016|       7.166|      14.106|         4|      27.545|       6.743|      12.111|   0|2.5145168522004355|
|      29.286|       8.104|      15.273|8589934592|      27.545|       6.743|      12.111|   0|3.8576736513085175|
|      28.977|       5.725|      14.603|8589934593|      27.545|       6.743|      12.111|   0| 3.049100195139542|
|      28.267|       9.456|      11.844|8589934594|      27.545|       6.743|      12.111|   0|2.8200960976534106|
|       28.29|      10.849|      11.372|8589934595|      27.545|       6.743|      12.111|   0| 4.237969089080287|
|      26.869|      11.393|      11.161|8589934596|      27.545|       6.743|      12.111|   0| 4.793952023122468|
|       27.64|       9.039|       12.97|         2|      27.708|       7.543|      13.332|   1|1.5406764747993003|
|      26.991|       9.793|      13.693|         3|      27.708|       7.543|      13.332|   1|2.3889139791964036|
|      29.016|       7.166|      14.106|         4|      27.708|       7.543|      13.332|   1|1.5659083625806454|
|      29.286|       8.104|      15.273|8589934592|      27.708|       7.543|      13.332|   1|2.5636470115833037|
|      28.977|       5.725|      14.603|8589934593|      27.708|       7.543|      13.332|   1|2.5555676473143896|
|      28.267|       9.456|      11.844|8589934594|      27.708|       7.543|      13.332|   1|  2.48720606303539|
|       28.29|      10.849|      11.372|8589934595|      27.708|       7.543|      13.332|   1|  3.88715319996524|
|      26.869|      11.393|      11.161|8589934596|      27.708|       7.543|      13.332|   1| 4.498851186691999|
|      26.991|       9.793|      13.693|         3|       27.64|       9.039|       12.97|   2|1.2298154333069653|
|      29.016|       7.166|      14.106|         4|       27.64|       9.039|       12.97|   2|2.5868902180030737|
|      29.286|       8.104|      15.273|8589934592|       27.64|       9.039|       12.97|   2|2.9811658793163454|
+------------+------------+------------+----------+------------+------------+------------+----+------------------+
only showing top 20 rows

This works for small data, but with a lot of rows the crossJoin will destroy performance.
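If the data is small enough to collect to the driver, a possible alternative (a sketch, not part of the answer above) is to compute all pairwise distances locally with numpy/scipy and avoid the crossJoin entirely:

import numpy as np
from scipy.spatial.distance import pdist

# bring the coordinates to the driver (fine for a few thousand atoms)
coords = np.array(df.select("Position_X", "Position_Y", "Position_Z").collect())

# condensed vector of the C(n, 2) pairwise Euclidean distances
distances = pdist(coords)  # length 45 for 10 atoms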

Upvotes: 2
