Rukshan Hassim

Reputation: 515

Scala spark aggregate a set of columns in a data frame to a JSON string

Given a data frame,

+---+----+-------+--------+
| id|name|payable|strategy|
+---+----+-------+--------+
|  0| Joe|    100|    st-1|
|  1| Tom|    200|    st-2|
|  2|John|    300|    st-1|
+---+----+-------+--------+

What would be the most efficient way to convert each row to a JSON string like the following?

{
  "payload": {
     "name": "Joe",
     "payments": [
         {
            "strategy": "st-1",
            "payable": 100
         }
     ]
  }
}

Currently I have a UDF that manually stringifies the provided columns, but I'm wondering whether there is a better way to achieve this. The to_json method is the best alternative I've found so far, but it takes only one column as input.
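For reference, my current approach is a UDF along these lines (a rough sketch; the helper name is my own, and the JSON is built by hand with string interpolation, which is exactly the boilerplate I'd like to avoid):

import org.apache.spark.sql.functions.udf
import spark.implicits._

// Manually build the JSON string from the three columns.
val toPayload = udf { (name: String, payable: Int, strategy: String) =>
  s"""{"payload":{"name":"$name","payments":[{"strategy":"$strategy","payable":$payable}]}}"""
}

val json = df.withColumn("jsonValue", toPayload($"name", $"payable", $"strategy"))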

Upvotes: 0

Views: 88

Answers (1)

vdep

Reputation: 3590

Using to_json() is the correct approach, but the columns need to be wrapped in struct (and array) so that the nesting matches the desired JSON shape:

import org.apache.spark.sql.functions.{array, struct, to_json}
import spark.implicits._

val df = Seq((0, "Joe", 100, "st-1"), (1, "Tom", 200, "st-2"))
  .toDF("id", "name", "payable", "strategy")

// Nest name and a single-element payments array inside a top-level
// payload struct, then serialize the whole struct to JSON.
val result = df.select(
  to_json(struct(
    struct(
      $"name",
      array(struct($"strategy", $"payable")) as "payments"
    ) as "payload"
  )) as "jsonValue"
)

result.show(false)
+-------------------------------------------------------------------------+
|jsonValue                                                                |
+-------------------------------------------------------------------------+
|{"payload":{"name":"Joe","payments":[{"strategy":"st-1","payable":100}]}}|
|{"payload":{"name":"Tom","payments":[{"strategy":"st-2","payable":200}]}}|
+-------------------------------------------------------------------------+
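A note on the "aggregate" part of the question: since payments is an array in the target JSON, if the same name can appear on several rows and all of its payments should land in a single array, you can group first and collect the structs (a sketch under that assumption, using collect_list):

import org.apache.spark.sql.functions.{collect_list, struct, to_json}

// One JSON document per name, with every (strategy, payable) pair in the array.
val grouped = df
  .groupBy($"name")
  .agg(collect_list(struct($"strategy", $"payable")) as "payments")
  .select(to_json(struct(struct($"name", $"payments") as "payload")) as "jsonValue")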

Upvotes: 3
