Patterson

Reputation: 2823

Apache Spark TypeError: Object of type DataFrame is not JSON serializable

I'm sending JSON data from Apache Spark / Databricks to an API. The API is expecting the data in the following JSON format:

Sample:
{
  "CtcID": 1,
  "LastName": "sample string 2",
  "CpyID": 3,
  "HonTitle": "sample string 4",
  "PositionCode": 1,
  "PositionFreeText": "sample string 6",
  "CreateDate": "2021-04-21T08:50:56.8602571+01:00",
  "ModifDate": "2021-04-21T08:50:56.8602571+01:00",
  "ModifiedBy": 1,
  "SourceID": "sample string 9",
  "OriginID": "sample string 10",
  "DoNotExport": true,
  "ParentEmailAddress": "sample string 13",
  "SupInfo": [
    {
      "FieldName": "sample string 1",
      "DATA_TYPE": "sample string 2",
      "IS_NULLABLE": "sample string 3",
      "FieldContent": "sample string 4"
    },
    {
      "FieldName": "sample string 1",
      "DATA_TYPE": "sample string 2",
      "IS_NULLABLE": "sample string 3",
      "FieldContent": "sample string 4"
    }
  ]
}

I'm sending the data in the following JSON format:

{"Last_name":"Finnigan","First_name":"Michael","Email":"[email protected]"}
{"Last_name":"Phillips","First_name":"Austin","Email":"[email protected]"}
{"Last_name":"Collins","First_name":"Colin","Email":"[email protected]"}
{"Last_name":"Finnigan","First_name":"Judy","Email":"[email protected]"}
{"Last_name":"Jones","First_name":"Julie","Email":"[email protected]"}
{"Last_name":"Smith","First_name":"Barry","Email":"[email protected]"}
{"Last_name":"Kane","First_name":"Harry","Email":"[email protected]"}
{"Last_name":"Smith","First_name":"John","Email":"[email protected]"}
{"Last_name":"Colins","First_name":"Ruby","Email":"[email protected]"}
{"Last_name":"Tests","First_name":"Smoke","Email":"[email protected]"}

The code in Apache Spark is as follows:

import json
import requests

url = 'https://enimuozygj4jqx.m.pipedream.net'
files = spark.read.json("abfss://azurestorageaccount.dfs.core.windows.net/PostContact.json")

r = requests.post(url, data=json.dumps(files))
print(r.status_code)

When I execute the code I get the following error:

TypeError: Object of type DataFrame is not JSON serializable
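A minimal illustration of why this error occurs: `json.dumps` only understands built-in Python types (dict, list, str, int, float, bool, None). Any other object, such as a Spark DataFrame, raises the same `TypeError`. The sketch below mimics the DataFrame with a plain class; the class name is only a stand-in.

```python
import json

# Stand-in for any object json.dumps cannot handle (e.g. a Spark DataFrame).
class NotSerializable:
    pass

try:
    json.dumps(NotSerializable())
except TypeError as e:
    print(e)  # Object of type NotSerializable is not JSON serializable

# A plain list of dicts, by contrast, serializes without complaint.
payload = [{"Last_name": "Finnigan", "First_name": "Michael"}]
print(json.dumps(payload))
```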

Upvotes: 0

Views: 1629

Answers (1)

Alex Ott

Reputation: 87349

A DataFrame is a collection of Row objects, and you can't call json.dumps on it directly. You can do something like this:

import json
import requests
from pyspark.sql.functions import struct, to_json

files_df = spark.read.json("...")
# Serialize every row into a single JSON string, then collect the strings to the driver
rows = files_df.select(to_json(struct('*')).alias("json")).collect()
# Parse each JSON string into a Python dict so the list is JSON-serializable
files = [json.loads(row[0]) for row in rows]
r = requests.post(url, data=json.dumps(files))

This code packs every row of the DataFrame into a struct (via the struct function, roughly the equivalent of a Python dict), serializes each struct to a JSON string with to_json, and collects those strings to the driver. The list comprehension then parses them back into Python dicts, giving a plain list of objects that json.dumps can serialize.
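The collect/loads/dumps round-trip can be sketched without a Spark session. In this hedged illustration, the hard-coded strings stand in for what `to_json(struct('*'))` would produce for each collected row:

```python
import json

# Stand-ins for the per-row JSON strings returned by to_json(struct('*')).
rows = [
    '{"Last_name":"Finnigan","First_name":"Michael","Email":"[email protected]"}',
    '{"Last_name":"Phillips","First_name":"Austin","Email":"[email protected]"}',
]

# Parse each row's JSON string into a Python dict ...
files = [json.loads(s) for s in rows]
# ... then serialize the whole list as one JSON array for the POST body.
body = json.dumps(files)
print(body)
```

Note the result is a single JSON array, which is what `requests.post(url, data=body)` would send; if the API instead expects newline-delimited JSON (as in the question's sample), each dict would need to be dumped on its own line.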

Upvotes: 1
