lazycoder

Reputation: 81

Convert an Array column to Array of Structs in PySpark dataframe

I have a DataFrame containing 3 columns:

| str1      | array_of_str1        | array_of_str2  |
+-----------+----------------------+----------------+
| John      | [Size, Color]        | [M, Black]     |
| Tom       | [Size, Color]        | [L, White]     |
| Matteo    | [Size, Color]        | [M, Red]       |

I want to add a column containing an array of structs that combines the three columns, like this:

| str1      | array_of_str1        | array_of_str2  | concat_result                              |
+-----------+----------------------+----------------+--------------------------------------------+
| John      | [Size, Color]        | [M, Black]     | [[John, Size, M], [John, Color, Black]]    |
| Tom       | [Size, Color]        | [L, White]     | [[Tom, Size, L], [Tom, Color, White]]      |
| Matteo    | [Size, Color]        | [M, Red]       | [[Matteo, Size, M], [Matteo, Color, Red]]  |
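
For reference, here is a minimal sketch that builds the input DataFrame shown above (the column names come from the tables; the exact schema is an assumption):

import pyspark.sql.functions as F
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Sample data matching the first table: one string column and
# two array<string> columns
df = spark.createDataFrame(
    [
        ("John", ["Size", "Color"], ["M", "Black"]),
        ("Tom", ["Size", "Color"], ["L", "White"]),
        ("Matteo", ["Size", "Color"], ["M", "Red"]),
    ],
    ["str1", "array_of_str1", "array_of_str2"],
)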

Upvotes: 4

Views: 15182

Answers (2)

Oli

Reputation: 10406

If the number of elements in the arrays is fixed, it is quite straightforward using the array and struct functions. Here is a bit of code in Scala.

import org.apache.spark.sql.functions.{array, col, struct}

// Build one struct per array index, then wrap the structs in an array
val result = df
    .withColumn("concat_result", array((0 to 1).map(i => struct(
                     col("str1"),
                     col("array_of_str1").getItem(i),
                     col("array_of_str2").getItem(i)
    )) : _*))

And in Python, since you were asking about PySpark:

import pyspark.sql.functions as F

# Same idea: one struct per array index, collected into an array column
df = df.withColumn("concat_result", F.array(*[
    F.struct(
        F.col("str1"),
        F.col("array_of_str1").getItem(i),
        F.col("array_of_str2").getItem(i))
    for i in range(2)]))

And you get the following schema:

root
 |-- str1: string (nullable = true)
 |-- array_of_str1: array (nullable = true)
 |    |-- element: string (containsNull = true)
 |-- array_of_str2: array (nullable = true)
 |    |-- element: string (containsNull = true)
 |-- concat_result: array (nullable = false)
 |    |-- element: struct (containsNull = false)
 |    |    |-- str1: string (nullable = true)
 |    |    |-- col2: string (nullable = true)
 |    |    |-- col3: string (nullable = true)
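
If the array length is not hardcoded but is the same in every row, one possible variation is to read the length from the data first and reuse the same construction (a sketch, assuming uniform array lengths):

# Sketch: derive the array length from the data instead of hardcoding 2.
# Assumes array_of_str1 and array_of_str2 have the same length per row.
n = df.select(F.max(F.size("array_of_str1"))).first()[0]

df = df.withColumn("concat_result", F.array(*[
    F.struct(
        F.col("str1"),
        F.col("array_of_str1").getItem(i),
        F.col("array_of_str2").getItem(i))
    for i in range(n)]))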

Upvotes: 9

Kafels

Reputation: 4059

Spark >= 2.4.x

For a dynamic number of values you can use higher-order functions:

import pyspark.sql.functions as f

# arrays_zip pairs up the i-th elements of both arrays; TRANSFORM then
# builds one struct per pair, pulling str1 from the enclosing row
expr = "TRANSFORM(arrays_zip(array_of_str1, array_of_str2), x -> struct(str1, concat(x.array_of_str1), concat(x.array_of_str2)))"
df = df.withColumn('concat_result', f.expr(expr))

df.show(truncate=False)

Schema and output:

root
 |-- array_of_str1: array (nullable = true)
 |    |-- element: string (containsNull = true)
 |-- array_of_str2: array (nullable = true)
 |    |-- element: string (containsNull = true)
 |-- str1: string (nullable = true)
 |-- concat_result: array (nullable = true)
 |    |-- element: struct (containsNull = false)
 |    |    |-- str1: string (nullable = true)
 |    |    |-- col2: string (nullable = true)
 |    |    |-- col3: string (nullable = true)

+-------------+-------------+------+-----------------------------------------+
|array_of_str1|array_of_str2|str1  |concat_result                            |
+-------------+-------------+------+-----------------------------------------+
|[Size, Color]|[M, Black]   |John  |[[John, Size, M], [John, Color, Black]]  |
|[Size, Color]|[L, White]   |Tom   |[[Tom, Size, L], [Tom, Color, White]]    |
|[Size, Color]|[M, Red]     |Matteo|[[Matteo, Size, M], [Matteo, Color, Red]]|
+-------------+-------------+------+-----------------------------------------+
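
On Spark 3.1+, the same higher-order transform can also be written with the Python DataFrame API instead of a SQL expression string. A sketch, assuming F.transform with a Python lambda is available; the "key" and "value" field names here are hypothetical aliases, chosen because the lambda does not reproduce the col2/col3 names shown above:

import pyspark.sql.functions as F

# Sketch: DataFrame-API equivalent of the TRANSFORM expression.
# Requires Spark >= 3.1 for F.transform with a Python lambda.
df = df.withColumn(
    "concat_result",
    F.transform(
        F.arrays_zip("array_of_str1", "array_of_str2"),
        lambda x: F.struct(
            F.col("str1").alias("str1"),          # outer column, resolved per row
            x["array_of_str1"].alias("key"),      # hypothetical field name
            x["array_of_str2"].alias("value"),    # hypothetical field name
        ),
    ),
)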

Upvotes: 0
