Thomas

Reputation: 1865

Relationalize JSON nested array

I have the following catalog and want to use AWS Glue to flatten it:

| accountId | resourceId | items                                                           |
|-----------|------------|-----------------------------------------------------------------|
| 1         | r1         | [{name: "tool", version: "1.0"}, {name: "app", version: "1.0"}] |
| 1         | r2         | [{name: "tool", version: "2.0"}, {name: "app", version: "2.0"}] |
| 2         | r3         | [{name: "tool", version: "3.0"}, {name: "app", version: "3.0"}] |

I want to flatten it to the following:

| accountId | resourceId | name | version |
|-----------|------------|------|---------|
| 1         | r1         | tool | 1.0     |
| 1         | r1         | app  | 1.0     |
| 1         | r2         | tool | 2.0     |
| 1         | r2         | app  | 2.0     |
| 2         | r3         | tool | 3.0     |
| 2         | r3         | app  | 3.0     |

Relationalize.apply can only flatten the nested items; it cannot bring accountId and resourceId along into the result. Is there a way to solve this?
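
For context, a minimal sketch of the Relationalize call I mean (the frame and staging path are placeholders):

from awsglue.transforms import Relationalize

# dyf is the DynamicFrame read from the catalog table above; staging path is a placeholder
dfc = Relationalize.apply(frame=dyf, staging_path="s3://my-bucket/tmp/", name="root")
# the exploded items land in a separate frame, linked back only by a generated index,
# so accountId and resourceId are not columns of that result
print(dfc.keys())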

Upvotes: 2

Views: 1067

Answers (1)

blackbishop

Reputation: 32680

In PySpark, if the structure of the array elements were valid JSON, like this:

{"name": "tool", "version": "1.0"}

You could have used explode + from_json to parse it into a struct.
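
For example, a minimal sketch of that path, assuming the elements were valid JSON strings (and an active spark session):

from pyspark.sql.functions import col, explode, from_json

# hypothetical version of the data where each array element is valid JSON
json_df = spark.createDataFrame(
    [(1, "r1", ['{"name": "tool", "version": "1.0"}', '{"name": "app", "version": "1.0"}'])],
    ["accountId", "resourceId", "items"])

json_df.withColumn("item", explode(col("items"))) \
    .withColumn("item", from_json(col("item"), "name string, version string")) \
    .select("accountId", "resourceId", "item.name", "item.version") \
    .show()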

But here you first need to do some cleansing. One way is to use the str_to_map function after exploding the items column, which gives you a map column. Then explode the map and pivot to turn its keys into columns. (The map keys and values still carry whitespace and double quotes, hence the trim calls in the aggregation.)

from pyspark.sql.functions import col, explode, expr, first, regexp_extract

df = spark.createDataFrame([
    (1, "r1", ['{name: "tool", version: "1.0"}', '{name: "app", version: "1.0"}']),
    (1, "r2", ['{name: "tool", version: "2.0"}', '{name: "app", version: "2.0"}']),
    (2, "r3", ['{name: "tool", version: "3.0"}', '{name: "app", version: "3.0"}'])
], ["accountId", "resourceId", "items"])

# remove leading and trailing {} and convert to map
sql_expr = "str_to_map(trim(BOTH '{}' FROM items), ',', ':')"

# explode the array, parse each element into a map, then pivot the map keys into columns
df.withColumn("items", explode(col("items"))) \
  .select(col("*"), explode(expr(sql_expr))) \
  .groupBy("accountId", "resourceId", "items") \
  .pivot("key") \
  .agg(first(expr("trim(BOTH '\"' FROM trim(value))"))) \
  .drop("items")\
  .show()

#+---------+----------+--------+----+
#|accountId|resourceId| version|name|
#+---------+----------+--------+----+
#|        1|        r1|     1.0| app|
#|        1|        r2|     2.0| app|
#|        2|        r3|     3.0|tool|
#|        2|        r3|     3.0| app|
#|        1|        r2|     2.0|tool|
#|        1|        r1|     1.0|tool|
#+---------+----------+--------+----+

Another simple way, if you know all the keys, is to use regexp_extract to extract the values from the string:

# extract each field directly from the raw string
df.withColumn("items", explode(col("items"))) \
  .withColumn("name", regexp_extract("items", "name: \"(.+?)\"[,}]", 1)) \
  .withColumn("version", regexp_extract("items", "version: \"(.+?)\"[,}]", 1)) \
  .drop("items") \
  .show()
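
With the df defined above, this should print the flattened table directly, keeping the original row order:

#+---------+----------+----+-------+
#|accountId|resourceId|name|version|
#+---------+----------+----+-------+
#|        1|        r1|tool|    1.0|
#|        1|        r1| app|    1.0|
#|        1|        r2|tool|    2.0|
#|        1|        r2| app|    2.0|
#|        2|        r3|tool|    3.0|
#|        2|        r3| app|    3.0|
#+---------+----------+----+-------+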

Upvotes: 2
