user422930

Reputation: 259

Filter nested JSON structure and get field names as values in Pyspark

I have the following complex data that would like to parse in PySpark:

records = '[{"segmentMembership":{"ups":{"FF6KCPTR6AQ0836R":{"lastQualificationTime":"2021-01-16 22:05:11.074357","status":"exited"},"QMS3YRT06JDEUM8O":{"lastQualificationTime":"2021-01-16 22:05:11.074357","status":"realized"},"8XH45RT87N6ZV4KQ":{"lastQualificationTime":"2021-01-16 22:05:11.074357","status":"exited"}}},"_aepgdcdevenablement2":{"emailId":{"address":"[email protected]"},"person":{"name":{"firstName":"Name2"}},"identities":{"customerid":"PH25PEUWOTA7QF93"}}},{"segmentMembership":{"ups":{"FF6KCPTR6AQ0836R":{"lastQualificationTime":"2021-01-16 22:05:11.074457","status":"realized"},"D45TOO8ZUH0B7GY7":{"lastQualificationTime":"2021-01-16 22:05:11.074457","status":"realized"},"QMS3YRT06JDEUM8O":{"lastQualificationTime":"2021-01-16 22:05:11.074457","status":"existing"}}},"_aepgdcdevenablement2":{"emailId":{"address":"[email protected]"},"person":{"name":{"firstName":"TestName"}},"identities":{"customerid":"9LAIHVG91GCREE3Z"}}}]'
df = spark.read.json(sc.parallelize([records]))
df.show()
df.printSchema()

The problem I am having is with the segmentMembership object. The JSON object looks like this:

"segmentMembership": {
      "ups": {
        "FF6KCPTR6AQ0836R": {
          "lastQualificationTime": "2021-01-16 22:05:11.074357",
          "status": "exited"
        },
        "QMS3YRT06JDEUM8O": {
          "lastQualificationTime": "2021-01-16 22:05:11.074357",
          "status": "realized"
        },
        "8XH45RT87N6ZV4KQ": {
          "lastQualificationTime": "2021-01-16 22:05:11.074357",
          "status": "exited"
        }
      }
    }

The annoying thing with this is that the key values ("FF6KCPTR6AQ0836R", "QMS3YRT06JDEUM8O", "8XH45RT87N6ZV4KQ") end up being defined as columns in PySpark.

In the end, if the status of the segment is "exited", I was hoping to get the results as follows.

+--------------------+----------------+---------+------------------+
|address             |customerid      |firstName|segment_id        |
+--------------------+----------------+---------+------------------+
|[email protected] |PH25PEUWOTA7QF93|Name2    |[8XH45RT87N6ZV4KQ]|
|[email protected]|9LAIHVG91GCREE3Z|TestName |[8XH45RT87N6ZV4KQ]|
+--------------------+----------------+---------+------------------+

After loading the data into a dataframe (above), I tried the following:

from pyspark.sql.functions import array, lit, udf
from pyspark.sql.types import ArrayType, StringType

dfx = df.select("_aepgdcdevenablement2.emailId.address", "_aepgdcdevenablement2.identities.customerid", "_aepgdcdevenablement2.person.name.firstName", "segmentMembership.ups")
dfx.show(truncate=False)

seg_list = array(*[lit(k) for k in ["8XH45RT87N6ZV4KQ", "QMS3YRT06JDEUM8O"]])
print(seg_list)

# if v["status"] in ['existing', 'realized']

def confusing_compare(ups, seg_list):
    seg_id_filtered_d = dict((k, ups[k]) for k in seg_list if k in ups)

    # This is the line I am having a problem with.
    # seg_id_status_filtered_d = {key for key, value in seg_id_filtered_d.items() if v["status"] in ['existing', 'realized']}
       
    return list(seg_id_filtered_d)

final_conf_dx_pred = udf(confusing_compare, ArrayType(StringType()))
result_df = dfx.withColumn("segment_id", final_conf_dx_pred(dfx.ups, seg_list)).select("address", "customerid", "firstName", "segment_id")

result_df.show(truncate=False)

I am not able to check the status field within the value field of the dict.
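For reference, the commented-out comprehension fails because it references `v` while the loop variable is named `value`. A minimal plain-Python sketch of the intended filtering (sample dict below is illustrative, not the full data):

```python
# Plain-Python sketch of the filtering the UDF attempts.
# The commented-out line fails because it uses `v`, but the
# loop variable is named `value`.
ups = {
    "FF6KCPTR6AQ0836R": {"status": "exited"},
    "QMS3YRT06JDEUM8O": {"status": "realized"},
    "8XH45RT87N6ZV4KQ": {"status": "exited"},
}
seg_list = ["8XH45RT87N6ZV4KQ", "QMS3YRT06JDEUM8O"]

seg_id_filtered_d = {k: ups[k] for k in seg_list if k in ups}
# Fixed: use `value`, not `v`
kept = [key for key, value in seg_id_filtered_d.items()
        if value["status"] in ["existing", "realized"]]
print(kept)  # ['QMS3YRT06JDEUM8O']
```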

Upvotes: 2

Views: 4318

Answers (1)

blackbishop

Reputation: 32680

You can actually do that without using a UDF. Here I'm using all the segment names present in the schema and filtering out those with status = 'exited'. You can adapt it depending on which segments and statuses you want.

First, using the schema fields, get the list of all segment names like this:

segment_names = df.select("segmentMembership.ups.*").schema.fieldNames()
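For intuition, here is a plain-Python analogue of what `fieldNames()` returns: since `spark.read.json` infers one struct for `ups` across all records, the field names are the union of the segment keys seen anywhere in the data (the mini-records below are made up for illustration):

```python
import json

# Assumed mini-records, not the original data: spark.read.json merges
# keys across records into a single struct schema, so fieldNames() on
# `ups` is the union of all segment keys.
records = json.loads("""[
  {"segmentMembership": {"ups": {"A1": {"status": "exited"}}}},
  {"segmentMembership": {"ups": {"A1": {"status": "realized"}, "B2": {"status": "existing"}}}}
]""")
segment_names = sorted({k for r in records for k in r["segmentMembership"]["ups"]})
print(segment_names)  # ['A1', 'B2']
```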

Then, by looping through the list created above and using the `when` function, you can create a column that holds either the segment name or null, depending on the status:

from pyspark.sql.functions import when, col, lit

active_segments = [
    when(col(f"segmentMembership.ups.{c}.status") != lit("exited"), lit(c))
    for c in segment_names
]

Finally, add a new column segments of array type and use the filter function to remove null elements from the array (which correspond to status 'exited'):

from pyspark.sql.functions import array, col, expr

dfx = df.withColumn("segments", array(*active_segments)) \
    .withColumn("segments", expr("filter(segments, x -> x is not null)")) \
    .select(
        col("_aepgdcdevenablement2.emailId.address"),
        col("_aepgdcdevenablement2.identities.customerid"),
        col("_aepgdcdevenablement2.person.name.firstName"),
        col("segments").alias("segment_id")
    )

dfx.show(truncate=False)

#+--------------------+----------------+---------+------------------------------------------------------+
#|address             |customerid      |firstName|segment_id                                            |
#+--------------------+----------------+---------+------------------------------------------------------+
#|[email protected] |PH25PEUWOTA7QF93|Name2    |[QMS3YRT06JDEUM8O]                                    |
#|[email protected]|9LAIHVG91GCREE3Z|TestName |[D45TOO8ZUH0B7GY7, FF6KCPTR6AQ0836R, QMS3YRT06JDEUM8O]|
#+--------------------+----------------+---------+------------------------------------------------------+
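The `when(...)` / `filter(segments, x -> x is not null)` combination above can be sketched in plain Python to show why 'exited' segments drop out of the array (statuses below are made up for illustration):

```python
# Plain-Python sketch of the when/array/filter pattern
# (statuses are illustrative, not the original data).
statuses = {
    "FF6KCPTR6AQ0836R": "realized",
    "D45TOO8ZUH0B7GY7": "realized",
    "QMS3YRT06JDEUM8O": "exited",
}
# when(status != 'exited', name) yields the name or null (None here)
active = [name if status != "exited" else None
          for name, status in statuses.items()]
# filter(segments, x -> x is not null) drops the nulls
segment_id = [x for x in active if x is not None]
print(segment_id)  # ['FF6KCPTR6AQ0836R', 'D45TOO8ZUH0B7GY7']
```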

Upvotes: 2
