WarBoy

Reputation: 176

Read each JSON object as a single row in a DataFrame using PySpark?

I have the below JSON file

{"name":"John", "age":31, "city":"New York"}
{"name":"Henry", "age":41, "city":"Boston"}
{"name":"Dave", "age":26, "city":"New York"}

From this file, I need to read each JSON line into the DataFrame as a single row, alongside the parsed columns.

Below is the expected output:

[expected output image: columns age, city, name plus a Json_column holding each original JSON line]

I've tried with the below code:

from pyspark.sql import SparkSession

spark = SparkSession \
    .builder \
    .appName('Read Json') \
    .getOrCreate()

df = spark.read.format('json').load('sample_json')
df.show()

But I'm only able to get the below output:

[actual output image: only the parsed age, city, and name columns]

Please help me in this. Thanks in advance.

Upvotes: 0

Views: 2263

Answers (1)

notNull

Reputation: 31490

Read the file as JSON, then use the to_json function to create the json_column.

1. Using the to_json function:

from pyspark.sql.functions import col, struct, to_json

spark.read.json("sample.json").\
    withColumn("Json_column", to_json(struct(col("age"), col("city"), col("name")))).\
    show(10, False)
#+---+--------+-----+------------------------------------------+
#|age|city    |name |Json_column                               |
#+---+--------+-----+------------------------------------------+
#|31 |New York|John |{"age":31,"city":"New York","name":"John"}|
#|41 |Boston  |Henry|{"age":41,"city":"Boston","name":"Henry"} |
#|26 |New York|Dave |{"age":26,"city":"New York","name":"Dave"}|
#+---+--------+-----+------------------------------------------+

# Or, more dynamically, over whatever columns the file has:
df = spark.read.json("sample.json")
df.withColumn("Json_column", to_json(struct([col(c) for c in df.columns]))).show(10, False)
#+---+--------+-----+------------------------------------------+
#|age|city    |name |Json_column                               |
#+---+--------+-----+------------------------------------------+
#|31 |New York|John |{"age":31,"city":"New York","name":"John"}|
#|41 |Boston  |Henry|{"age":41,"city":"Boston","name":"Henry"} |
#|26 |New York|Dave |{"age":26,"city":"New York","name":"Dave"}|
#+---+--------+-----+------------------------------------------+
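Conceptually, `to_json(struct(...))` just re-serializes the selected fields of each row back into a compact JSON string. A minimal plain-Python analogue using the standard `json` module (one hard-coded row for illustration; the field order mirrors the alphabetical column order Spark infers for this file):

```python
import json

# one row as Spark holds it after spark.read.json (columns sorted: age, city, name)
row = {"age": 31, "city": "New York", "name": "John"}

# compact separators reproduce the style of Spark's to_json output
json_column = json.dumps(row, separators=(",", ":"))
# json_column == '{"age":31,"city":"New York","name":"John"}'
```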

2. Another approach, using the get_json_object function:

Read the JSON file as text, then create the name, age, and city columns by extracting each field from the JSON object.

from pyspark.sql.functions import col, get_json_object

spark.read.text("sample.json").\
    withColumn("name", get_json_object(col("value"), "$.name")).\
    withColumn("city", get_json_object(col("value"), "$.city")).\
    withColumn("age", get_json_object(col("value"), "$.age")).\
    withColumnRenamed("value", "Json_column").\
    select("age", "city", "name", "Json_column").\
    show(10, False)
#+---+--------+-----+--------------------------------------------+
#|age|city    |name |Json_column                                 |
#+---+--------+-----+--------------------------------------------+
#|31 |New York|John |{"name":"John", "age":31, "city":"New York"}|
#|41 |Boston  |Henry|{"name":"Henry", "age":41, "city":"Boston"} |
#|26 |New York|Dave |{"name":"Dave", "age":26, "city":"New York"}|
#+---+--------+-----+--------------------------------------------+
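Since the question's data is JSON Lines (one object per line), the same row shape can also be sketched without Spark using the standard `json` module; the sample records are inlined here rather than read from `sample.json`, to keep the sketch self-contained:

```python
import json

# the three sample lines from the question, inlined for illustration
lines = [
    '{"name":"John", "age":31, "city":"New York"}',
    '{"name":"Henry", "age":41, "city":"Boston"}',
    '{"name":"Dave", "age":26, "city":"New York"}',
]

rows = []
for line in lines:
    obj = json.loads(line)
    # parsed fields plus the original line, matching the Spark output above
    rows.append({"age": obj["age"], "city": obj["city"],
                 "name": obj["name"], "Json_column": line})
```

Note that `get_json_object` always returns string columns, so in the Spark version `age` would need a cast (e.g. `col("age").cast("int")`) to become numeric, whereas `json.loads` keeps it as an int.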

Upvotes: 1
