Reputation: 15258
I have a dataframe df:
+------+----------+--------------------+
|SiteID| LastRecID|        Col_to_split|
+------+----------+--------------------+
|     2|1056962584|[214, 207, 206, 205]|
|     2|1056967423|          [213, 208]|
|     2|1056870114|     [213, 202, 199]|
|     2|1056876861|[203, 213, 212, 1...|
+------+----------+--------------------+
I want to split the array column into rows like this:
+----------+-------------+-------------+
| RecID| index| Value|
+----------+-------------+-------------+
|1056962584| 0| 214|
|1056962584| 1| 207|
|1056962584| 2| 206|
|1056962584| 3| 205|
|1056967423| 0| 213|
|1056967423| 1| 208|
|1056870114| 0| 213|
|1056870114| 1| 202|
|1056870114| 2| 199|
|1056876861| 0| 203|
|1056876861| 1| 213|
|1056876861| 2| 212|
|1056876861| 3| 1..|
|1056876861| etc...| etc...|
Value contains the value from the list, and index contains the position of that value in the list.
How can I do that using PySpark?
Upvotes: 0
Views: 2528
Reputation: 214967
As of Spark 2.1.0 you can use posexplode, which unnests the array column and also outputs the index of each element (using the sample data from @Herve):
import pyspark.sql.functions as F

# posexplode yields one row per array element: its position ("index") and its value
df.select(
    F.col("LastRecID").alias("RecID"),
    F.posexplode(F.col("coltosplit")).alias("index", "value")
).show()
+-----+-----+-----+
|RecID|index|value|
+-----+-----+-----+
|10526| 0| 214|
|10526| 1| 207|
|10526| 2| 206|
|10526| 3| 205|
|10896| 0| 213|
|10896| 1| 208|
+-----+-----+-----+
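The same approach applied directly to the DataFrame from the question (a sketch assuming the column names shown there, keeping SiteID in the result):
import pyspark.sql.functions as F

# One output row per array element, with its position and its value
df.select(
    "SiteID",
    F.col("LastRecID").alias("RecID"),
    F.posexplode("Col_to_split").alias("index", "Value")
).show()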
Upvotes: 7
Reputation: 11
I quickly tried this with Spark 2.0. You can change the query a little if you want to order the result differently.
d = [
    {'SiteID': '2', 'LastRecId': 10526, 'coltosplit': [214, 207, 206, 205]},
    {'SiteID': '2', 'LastRecId': 10896, 'coltosplit': [213, 208]}
]
df = spark.createDataFrame(d)
df.show()
+---------+------+--------------------+
|LastRecId|SiteID| coltosplit|
+---------+------+--------------------+
| 10526| 2|[214, 207, 206, 205]|
| 10896| 2| [213, 208]|
+---------+------+--------------------+
query = """
select LastRecId as RecID,
       (row_number() over (partition by LastRecId order by 1)) - 1 as index,
       t as Value
from test
LATERAL VIEW explode(coltosplit) test AS t
"""
df.createTempView("test")
spark.sql(query).show()
+-----+-----+-----+
|RecID|index|Value|
+-----+-----+-----+
|10896| 0| 213|
|10896| 1| 208|
|10526| 0| 214|
|10526| 1| 207|
|10526| 2| 206|
|10526| 3| 205|
+-----+-----+-----+
So basically I just explode the list into a new column and apply row_number over it to build the index.
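For reference, a rough DataFrame-API equivalent of the same idea (explode, then number the rows within each RecID). This is only a sketch, and like the order by 1 above it does not guarantee that the numbering follows the original array order; the posexplode approach in the other answer does.
import pyspark.sql.functions as F
from pyspark.sql import Window

exploded = df.select(
    F.col("LastRecId").alias("RecID"),
    F.explode("coltosplit").alias("Value")
)

# Number the rows within each RecID, starting at 0
w = Window.partitionBy("RecID").orderBy(F.monotonically_increasing_id())
exploded.select(
    "RecID",
    (F.row_number().over(w) - 1).alias("index"),
    "Value"
).show()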
Hope this helps
Upvotes: 1