Nir Ben Yaacov

Reputation: 1182

Write a Spark Dataset to JSON with all keys in the schema, including null columns

I am writing a dataset to json using:

ds.coalesce(1).write.format("json").option("nullValue",null).save("project/src/test/resources")

For records that have columns with null values, the JSON document does not contain those keys at all.

Is there a way to force null-valued keys into the JSON output?

This is needed because I read this JSON back into another Dataset (in a test case), and I cannot enforce a schema if some documents do not have all the keys of the case class. (I read it by placing the JSON file under the resources folder and transforming it to a Dataset via RDD[String], as explained here: https://databaseline.bitbucket.io/a-quickie-on-reading-json-resource-files-in-apache-spark/; a sketch of that reading pattern is shown below.)
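For context, a minimal sketch of that resource-file reading pattern, assuming Spark 2.2+ and Scala 2.12+, and using the Dataset[String] overload of spark.read.json rather than the deprecated RDD[String] one; the case class Record and the file name records.json are illustrative placeholders, not from the original post:

import scala.io.Source
import org.apache.spark.sql.SparkSession

// Hypothetical case class standing in for the real schema.
case class Record(id: Long, name: String, city: String)

val spark = SparkSession.builder().master("local[*]").getOrCreate()
import spark.implicits._

// Load the JSON-lines file from the test classpath, then let Spark parse it.
// .as[Record] is where missing keys cause trouble: a key that never appears
// in the file is absent from the inferred schema, so the cast fails.
val lines = Source.fromResource("records.json").getLines().toSeq
val ds = spark.read.json(lines.toDS()).as[Record]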

Upvotes: 6

Views: 8912

Answers (2)

decodering

Reputation: 184

Since Spark 3, one can use the ignoreNullFields option when writing to a JSON file.

spark_dataframe.write.json(output_path, ignoreNullFields=False)

PySpark docs: https://spark.apache.org/docs/3.1.1/api/python/_modules/pyspark/sql/readwriter.html#DataFrameWriter.json
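For the Scala Dataset in the question, the equivalent should be the same option set on the writer; a sketch, assuming Spark 3.0+ (ds and the output path are taken from the question):

ds.coalesce(1)
  .write
  .option("ignoreNullFields", "false")
  .json("project/src/test/resources")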

Upvotes: 0

Glennie Helles Sindholt

Reputation: 13154

I agree with @philantrovert.

ds.na.fill("")
  .coalesce(1)
  .write
  .format("json")
  .save("project/src/test/resources")

Since Datasets are immutable, you are not altering the data in ds, and you can still process it (complete with null values and all) in any subsequent code. You are simply replacing null values with an empty string in the saved file. A short demonstration of this follows.
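A small sketch of that immutability point, assuming an existing SparkSession named spark with its implicits imported; the case class Person and the sample rows are hypothetical, not from the original answer:

import spark.implicits._

// Hypothetical case class for illustration only.
case class Person(name: String, city: String)

val ds = Seq(Person("Alice", null), Person("Bob", "Paris")).toDS()
val filled = ds.na.fill("")  // returns a new DataFrame; ds is untouched

ds.filter($"city".isNull).count()      // 1 -- the original still has the null
filled.filter($"city" === "").count()  // 1 -- the copy holds "" instead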

Upvotes: 5
