Reputation: 1270
Let's say I have a dataframe like this:
+-----------+-----------+-----------+-----------+------------+
| ColA      | ColB      | ColC      | ColD      | ColE       |
+-----------+-----------+-----------+-----------+------------+
| ''        | sample_1x | sample_1y | ''        | sample_1z  |
| sample2_x | sample2_y | ''        | ''        | ''         |
| sample3_x | ''        | ''        | ''        | sample3_y  |
| sample4_x | sample4_y | ''        | sample4_z | sample4_zz |
| sample5_x | ''        | ''        | ''        | ''         |
+-----------+-----------+-----------+-----------+------------+
I want to create another dataframe that shows the left-to-right relationship between the non-empty values in each row, skipping columns with empty values. Rows with only one non-empty value should be excluded. For example:
+-----------+------------+-----------+
| From      | To         | Label     |
+-----------+------------+-----------+
| sample_1x | sample_1y  | ColB_ColC |
| sample_1y | sample_1z  | ColC_ColE |
| sample2_x | sample2_y  | ColA_ColB |
| sample3_x | sample3_y  | ColA_ColE |
| sample4_x | sample4_y  | ColA_ColB |
| sample4_y | sample4_z  | ColB_ColD |
| sample4_z | sample4_zz | ColD_ColE |
+-----------+------------+-----------+
I'm thinking the approach would be to write a UDF that contains this logic, but I'm not entirely sure how I would return a completely new DF, since I'm used to UDFs just adding another column to the same DF. Or is there another Spark function that can handle this case more easily than a UDF? Using PySpark, if that matters.
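For reference, a minimal sketch to reproduce the sample frame above (assuming the blank cells are literal empty strings):
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

df = spark.createDataFrame([
    ('',          'sample_1x', 'sample_1y', '',          'sample_1z'),
    ('sample2_x', 'sample2_y', '',          '',          ''),
    ('sample3_x', '',          '',          '',          'sample3_y'),
    ('sample4_x', 'sample4_y', '',          'sample4_z', 'sample4_zz'),
    ('sample5_x', '',          '',          '',          ''),
], ['ColA', 'ColB', 'ColC', 'ColD', 'ColE'])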
Upvotes: 3
Views: 667
Reputation: 14008
You can use a udf that takes an array argument and returns an array of structs, for example:
from pyspark.sql import functions as F
df.show()
+---------+---------+---------+---------+----------+
| ColA| ColB| ColC| ColD| ColE|
+---------+---------+---------+---------+----------+
| null|sample_1x|sample_1y| null| sample_1z|
|sample2_x|sample2_y| null| null| null|
|sample3_x| null| null| null| sample3_y|
|sample4_x|sample4_y| null|sample4_z|sample4_zz|
|sample5_x| null| null| null| null|
+---------+---------+---------+---------+----------+
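# note: the question's empty strings appear here as nulls; if your frame
# actually holds '', one way (a sketch) to normalize them first:
df = df.select([F.when(F.col(c) == '', None).otherwise(F.col(c)).alias(c)
                for c in df.columns])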
# the columns involved; we group them into an array using F.array(cols)
cols = df.columns

# define a function to convert the array into an array of structs
def find_route(arr, cols):
    # keep only the non-null entries, remembering which column each came from
    d = [(cols[i], e) for i, e in enumerate(arr) if e is not None]
    # pair each entry with its successor
    return [{'From': d[i][1], 'To': d[i+1][1], 'Label': d[i][0] + '_' + d[i+1][0]}
            for i in range(len(d)-1)]

# set up the UDF and pass cols as an extra argument
udf_find_route = F.udf(lambda a: find_route(a, cols), 'array<struct<From:string,To:string,Label:string>>')

# retrieve the data from the array of structs after array-explode
df.select(F.explode(udf_find_route(F.array(cols))).alias('c1')).select('c1.*').show()
+---------+----------+---------+
| From| To| Label|
+---------+----------+---------+
|sample_1x| sample_1y|ColB_ColC|
|sample_1y| sample_1z|ColC_ColE|
|sample2_x| sample2_y|ColA_ColB|
|sample3_x| sample3_y|ColA_ColE|
|sample4_x| sample4_y|ColA_ColB|
|sample4_y| sample4_z|ColB_ColD|
|sample4_z|sample4_zz|ColD_ColE|
+---------+----------+---------+
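If you'd rather avoid a Python UDF entirely, the same pairing can be expressed with Spark SQL's higher-order functions; a sketch assuming Spark 2.4+, the null-based frame above, and illustrative names (p, arr, pairs) of my choosing:
from pyspark.sql import functions as F

# array of (column name, value) structs with the null entries filtered out
arr = "array({})".format(",".join("struct('{0}' as name, {0} as val)".format(c) for c in cols))

# pair each remaining element with its successor: transform exposes a 0-based
# index i, and p[i + 1] is the next element of the filtered array
pairs = """
  transform(slice(p, 1, greatest(size(p) - 1, 0)),
            (x, i) -> struct(x.val as `From`,
                             p[i + 1].val as `To`,
                             concat(x.name, '_', p[i + 1].name) as Label))
"""

(df.withColumn('p', F.expr("filter({}, s -> s.val is not null)".format(arr)))
   .select(F.explode(F.expr(pairs)).alias('c1'))
   .select('c1.*')
   .show())
Rows with fewer than two non-null values produce an empty array, which explode drops, matching the exclusion rule in the question.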
Upvotes: 2
Reputation: 2441
Using mainly Spark SQL. Note that this only compares adjacent column pairs, so relationships that hop over empty columns (e.g. ColA → ColE in the third row) are not produced:
df.createOrReplaceTempView("df")
cols_df = df.columns
qry = " union ".join([f"""
select {enum_cols[1]} as From,
{cols_df[enum_cols[0] + 1]} as To,
'{enum_cols[1]}{cols_df[enum_cols[0] + 1]}' as Label from df where {enum_cols[1]} <> '' and {cols_df[enum_cols[0] + 1]} <> ''"""
for enum_cols in enumerate(cols_df) if enum_cols[0] < len(cols_df) - 1])
final_df = spark.sql(qry)
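For illustration, you can print the generated SQL; with the five columns above, the first of the four union branches reads roughly:
print(qry)
# select ColA as `From`,
#        ColB as `To`,
#        'ColA_ColB' as Label
# from df where ColA <> '' and ColB <> ''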
Upvotes: 0