Achraf Oussidi

Reputation: 109

Pyspark agg function to "explode" rows into columns

Basically, I have a dataframe that looks like this:

+----+-------+------+------+
| id | index | col1 | col2 |
+----+-------+------+------+
| 1  | a     | a11  | a12  |
+----+-------+------+------+
| 1  | b     | b11  | b12  |
+----+-------+------+------+
| 2  | a     | a21  | a22  |
+----+-------+------+------+
| 2  | b     | b21  | b22  |
+----+-------+------+------+

and my desired output is this:

+----+--------+--------+--------+--------+
| id | col1_a | col1_b | col2_a | col2_b |
+----+--------+--------+--------+--------+
| 1  | a11    | b11    | a12    | b12    |
+----+--------+--------+--------+--------+
| 2  | a21    | b21    | a22    | b22    |
+----+--------+--------+--------+--------+

So basically I want to "explode" the index column into new columns after grouping by id. Btw, each id has the same number of rows and the same set of index values. I'm using PySpark.

Upvotes: 0

Views: 728

Answers (1)

Mahesh Gupta

Reputation: 1892

Using pivot, you can achieve the desired output.

from pyspark.sql import functions as F

# Recreate the sample data from the question
df = spark.createDataFrame(
    [[1, "a", "a11", "a12"], [1, "b", "b11", "b12"],
     [2, "a", "a21", "a22"], [2, "b", "b21", "b22"]],
    ["id", "index", "col1", "col2"],
)
df.show()
+---+-----+----+----+                                                           
| id|index|col1|col2|
+---+-----+----+----+
|  1|    a| a11| a12|
|  1|    b| b11| b12|
|  2|    a| a21| a22|
|  2|    b| b21| b22|
+---+-----+----+----+

Using pivot:

# Pivot on "index"; with two aggregations the pivoted columns get auto-generated
# names, grouped by pivot value (all "a" columns first, then all "b" columns)
df3 = df.groupBy("id").pivot("index").agg(F.first(F.col("col1")), F.first(F.col("col2")))

collist = ["id", "col1_a", "col2_a", "col1_b", "col2_b"]

Rename the columns:

df3.toDF(*collist).show()
+---+------+------+------+------+
| id|col1_a|col2_a|col1_b|col2_b|
+---+------+------+------+------+
|  1|   a11|   a12|   b11|   b12|
|  2|   a21|   a22|   b21|   b22|
+---+------+------+------+------+

Note: the column order differs from the one in the question, so rearrange the columns based on your requirement (see the sketch below).
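For instance, to match the exact column order shown in the question, a select over the renamed DataFrame should do it (a minimal sketch, not part of the original answer):

# Reorder the renamed columns to match the layout asked for in the question
df3.toDF(*collist).select("id", "col1_a", "col1_b", "col2_a", "col2_b").show()

This should produce the column layout of the desired output table above.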

Upvotes: 2
