Rohan Kumar

Reputation: 408

How to convert a list of JSON objects into a single PySpark dataframe?

I'm new to PySpark. I have a list of JSON objects coming from an API, and each JSON object has the same schema (key-value pairs), like this:

[ {'count': 308,
  'next': 'some_url',
  'previous': None,
  'results': [{'assigned_to': 43,
    'category': 'Unused',
    'comments': None,
    'completed_ts': None,
    'created': '2019-05-27T05:14:22.306843Z',
    'description': 'Pollution',
    'display_name': {'admin': False,
     'business_name': 'Test Business',
     'contact_number': 'some_number',
     'dob': None,
     'email': 'some_mail',
     'emp_id': None,
     'first_name': 'Alisha'}}]},
  {'count': 309,
  'next': 'some_url',
  'previous': None,
  'results': [{'assigned_to': 44,
    'category': 'Unused',
    'comments': None,
    'completed_ts': None,
    'created': '2019-05-27T05:14:22.306843Z',
    'description': 'Pollution',
    'display_name': {'admin': False,
     'business_name': 'Test Business',
     'contact_number': 'some_number',
     'dob': None,
     'email': 'some_mail',
     'emp_id': None,
     'first_name': 'Ali'}}]}, ...]

If they had been separate JSON files, I would have created a dataframe from each using df = spark.read.json('myfile.json') and then merged all the dataframes into one, along the lines of the sketch below.
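For example (a sketch; the file names are hypothetical, and spark is an existing SparkSession):

from functools import reduce
from pyspark.sql import DataFrame

# Hypothetical file names, one JSON payload per file
paths = ['page1.json', 'page2.json']

# Read each file into its own dataframe (multiLine=True because each
# file would hold a single pretty-printed object, not one object per line)
dfs = [spark.read.json(p, multiLine=True) for p in paths]

# Merge them all into a single dataframe
merged = reduce(DataFrame.unionByName, dfs)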

I'm having trouble creating the dataframe directly from the list itself. I have tried this:

from pyspark.sql import SparkSession
spark = SparkSession.builder.appName("Basics").getOrCreate()
sc = spark.sparkContext
df = pyspark.sql.SQLContext(sc.parallelize(data_list))

It gives me AttributeError: 'RDD' object has no attribute '_jsc'

Upvotes: 5

Views: 19554

Answers (1)

mayank agrawal

Reputation: 2545

I couldn't find a straightforward answer to your problem (the error in your attempt comes from passing an RDD to the SQLContext constructor, which expects a SparkContext). But this solution works if the payloads are saved to files:

import json
import ast

# Each file comes back as a (path, content) pair; the content is a Python
# literal (None/False, single quotes), so parse it with ast.literal_eval
# and re-serialize it as a proper JSON string for spark.read.json.
rdd = sc.wholeTextFiles(path).map(lambda x: ast.literal_eval(x[1])) \
                             .map(lambda x: json.dumps(x))

df = spark.read.json(rdd)

This will give you the following output:

+-----+--------+--------+--------------------+
|count|    next|previous|             results|
+-----+--------+--------+--------------------+
|  308|some_url|    null|[[43,Unused,null,...|
|  309|some_url|    null|[[44,Unused,null,...|
+-----+--------+--------+--------------------+
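
If you then need the nested fields inside results as top-level columns, you can explode the array (a sketch using the field names from your sample):

from pyspark.sql.functions import explode

# One output row per element of the `results` array
flat = df.select('count', explode('results').alias('r')) \
         .select('count', 'r.assigned_to', 'r.category', 'r.description')
flat.show()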

EDIT: If the data is already in a variable, all you have to do is:

import json

# Distribute the list of dicts, serialize each one to a JSON string,
# and let spark.read.json infer the schema.
rdd = sc.parallelize(data).map(lambda x: json.dumps(x))
df = spark.read.json(rdd)
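
The json.dumps step is the key: spark.read.json can parse an RDD of JSON strings, but not an RDD of Python dicts, so each dict has to be serialized first. A self-contained sketch, using a trimmed-down version of the data from your question:

import json
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("Basics").getOrCreate()
sc = spark.sparkContext

# Trimmed-down version of the list from the question
data = [
    {'count': 308, 'next': 'some_url', 'previous': None,
     'results': [{'assigned_to': 43, 'category': 'Unused'}]},
    {'count': 309, 'next': 'some_url', 'previous': None,
     'results': [{'assigned_to': 44, 'category': 'Unused'}]},
]

# Each dict becomes one JSON string, i.e. one row for spark.read.json
rdd = sc.parallelize(data).map(json.dumps)
df = spark.read.json(rdd)
df.show()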

Upvotes: 8
