Adas Kavaliauskas

Reputation: 75

Creating dictionary column out of two groups of columns on PySpark Dataframe

I have a dataframe which has two groups of columns, info.name and info.value:

    id      |info.name.1|info.name.2|info.name.3|info.value.1|info.value.2|info.value.3|
    -------------------------------------------------------------------------------------
    1       |amount     |currency   |action     |10          |USD         |add         |
    2       |amount     |currency   |action     |100         |EUR         |transfer    |
    3       |amount     |currency   |action     |2000        |GBP         |add         |

My target is to collect them into name:value pairs and create a single info column with a dictionary in it:

    id      |info                                              |
    -----------------------------------------------------------|
    1       |{amount : 10, currency : USD, action: add}        |
    2       |{amount : 100, currency : EUR, action: transfer}  |
    3       |{amount : 2000, currency : GBP, action: add}      |

I'd appreciate your advice and help.

Thank you.

Upvotes: 1

Views: 658

Answers (1)

raul ferreira

Reputation: 916

Here's a possible solution.

Let's create some data to work with:

from pyspark.sql import SparkSession
import pyspark.sql.types as T

spark = SparkSession.builder.getOrCreate()

data = [
    ('A', 'B', 10, 100),
    ('C', 'D', 12, 20),
    ('A', 'D', 30, 0)
]

schema = T.StructType([
    T.StructField('KEY_1', T.StringType()),
    T.StructField('KEY_2', T.StringType()),
    T.StructField('VAL_1', T.IntegerType()),
    T.StructField('VAL_2', T.IntegerType())
])

df = spark.createDataFrame(data, schema)

df.show()

+-----+-----+-----+-----+
|KEY_1|KEY_2|VAL_1|VAL_2|
+-----+-----+-----+-----+
|    A|    B|   10|  100|
|    C|    D|   12|   20|
|    A|    D|   30|    0|
+-----+-----+-----+-----+
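
The key step will be pairing each KEY column with the VAL column that shares its numeric suffix. A quick plain-Python check (just illustrative, using the column names above) shows how sorting by the suffix and itertools.groupby do that pairing:

from itertools import groupby

fields = ['KEY_1', 'KEY_2', 'VAL_1', 'VAL_2']
fsort = lambda x: x.split('_')[1]

# Sorting by suffix puts each KEY next to its matching VAL, so
# groupby yields one (suffix, [KEY_n, VAL_n]) group per pair.
for suffix, cols in groupby(sorted(fields, key=fsort), key=fsort):
    print(suffix, list(cols))

# 1 ['KEY_1', 'VAL_1']
# 2 ['KEY_2', 'VAL_2']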

And here is the logic for the transformation you intend to do:

import pyspark.sql.functions as F

from itertools import groupby
from functools import reduce
from pyspark.sql import DataFrame


# Group the column names by their numeric suffix, so that
# 'KEY_1' is paired with 'VAL_1' and 'KEY_2' with 'VAL_2'.
fields = [f.name for f in df.schema.fields]
fsort = lambda x: x.split('_')[1]

grouped = groupby(sorted(fields, key=fsort), key=fsort)


# Build one single-entry map column per (key, value) pair of columns.
dfs = [
    df.select(F.create_map(F.col(key), F.col(value)).alias('map_values'))
    for group, (key, value) in grouped
]

# Stack the per-pair dataframes into a single column of maps.
df = reduce(DataFrame.union, dfs)

df.show()

+----------+
|map_values|
+----------+
| [A -> 10]|
| [C -> 12]|
| [A -> 30]|
|[B -> 100]|
| [D -> 20]|
|  [D -> 0]|
+----------+
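
This gives one single-entry map per key/value pair, stacked across rows. To get what the question asks for, one info map per row, you can instead hand all the pairs to F.create_map at once. A sketch, assuming the dotted column names from the question (the backticks are needed because the names contain dots, and the values are cast to string so the map has a single value type):

import pyspark.sql.functions as F
from itertools import chain

# Pair each info.name.N column with its matching info.value.N column.
pairs = chain.from_iterable(
    (F.col(f'`info.name.{i}`'), F.col(f'`info.value.{i}`').cast('string'))
    for i in range(1, 4)
)

result = df.select('id', F.create_map(*pairs).alias('info'))
result.show(truncate=False)

create_map takes an even number of columns, alternating keys and values, and builds one map per row.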

Upvotes: 1
