Reputation: 97
I have a large JSON file that is read into a Python DataFrame, yielding a list of dicts per row. I need to convert it into a different data format.
The data format is the following:
{
    "data": [{
            "item": [{
                    "value": 0,
                    "type": "a"
                },
                {
                    "value": 0,
                    "type": "b"
                },
                {
                    "value": 70,
                    "type": "c"
                }
            ],
            "timestamp": "2019-01-12T04:52:06.669Z"
        },
        {
            "item": [{
                    "value": 30,
                    "type": "a"
                },
                {
                    "value": 0,
                    "type": "b"
                }
            ],
            "timestamp": "2019-01-12T04:53:06.669z"
        }
    ]
}
What would be the most efficient way of converting the data to a dataframe of the form:
timestamp-----------------------------a-------b------c
2019-01-12T04:52:06.669Z------0-------0------70
2019-01-12T04:53:06.669Z------30------0------0
So far I have managed to do it using for loops, but it's very inefficient and slow. What I have so far is this:
import json
import pandas

with open('try.json') as f:
    data = json.load(f)

df_data = pandas.DataFrame(data['data'])
df_formatted = pandas.DataFrame(columns=['a', 'b', 'c'])
for d, timestamp in zip(df_data['item'], df_data['timestamp']):
    row = dict()
    for entry in d:
        category = entry['type']
        value = entry['value']
        row[category] = value
    row['timestamp'] = timestamp
    df_formatted = df_formatted.append(row, ignore_index=True)
df = df_formatted.fillna(0)
The number of items in the list is often in the several thousands. Any pointers or examples of how to do this efficiently?
Upvotes: 0
Views: 928
Reputation: 148910
You can extract a list of dictionaries from the JSON and feed it into a dataframe. The code could be:
df = pd.DataFrame([dict([('timestamp', d['timestamp']), ('a', 0),
                         ('b', 0), ('c', 0)]
                        + [(item['type'], item['value'])
                           for item in d['item']])
                   for d in data['data']],
                  columns=['timestamp', 'a', 'b', 'c'])
print(df)
outputs as expected:
timestamp a b c
0 2019-01-12T04:52:06.669Z 0 0 70
1 2019-01-12T04:53:06.669z 30 0 0
The trick here is to first build a list of pairs with the default values and then extend it with the actual values before building a dict from it. Because the last value seen for each key is kept, you end up with a dictionary containing all the relevant values.
The columns parameter is only present to ensure the expected order of columns.
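To see the trick in isolation, here is a small sketch with illustrative values taken from the second entry of the sample data:

```python
# dict() keeps the last value seen for each key, so the defaults listed
# first are overridden by any actual (type, value) pairs that follow.
defaults = [('timestamp', None), ('a', 0), ('b', 0), ('c', 0)]
actual = [('a', 30), ('b', 0)]  # parsed from one "item" list; no "c" entry
row = dict(defaults + actual)
# row == {'timestamp': None, 'a': 30, 'b': 0, 'c': 0}
```

The missing type `c` keeps its default of 0, which is why no `fillna` step is needed afterwards.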
Upvotes: 0
Reputation: 8273
You can unpack the nested JSON object by iterating over its entries. Try:
import pandas as pd
a = [
    {
        "item": [
            {"value": 0, "type": "a"},
            {"value": 0, "type": "b"},
            {"value": 70, "type": "c"},
        ],
        "timestamp": "2019-01-12T04:52:06.669Z"
    },
    {
        "item": [
            {"value": 30, "type": "a"},
            {"value": 0, "type": "b"}
        ],
        "timestamp": "2019-01-12T04:53:06.669z"
    }
]
rows = []
for data in a:
    data_row = data['item']
    timestamp = data['timestamp']
    for row in data_row:
        row['timestamp'] = timestamp
        rows.append(row)

df = pd.DataFrame(rows)
df = df.pivot_table(index='timestamp', columns=['type'], values=['value']).reset_index()
df.columns = ['timestamp', 'a', 'b', 'c']
If you are looking for a more compact solution, use json_normalize:
from pandas.io.json import json_normalize  # in pandas >= 1.0: from pandas import json_normalize

# record_path flattens each "item" list; meta carries the timestamp along
df = json_normalize(a, record_path='item', meta=['timestamp'])
df = df.pivot_table(index='timestamp', columns=['type'], values=['value']).reset_index()
df.columns = ['timestamp', 'a', 'b', 'c']
Final output
timestamp a b c
2019-01-12T04:52:06.669Z 0.0 0.0 70.0
2019-01-12T04:53:06.669z 30.0 0.0 NaN
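If you want zeros instead of NaN in the missing cells (as in the question's expected output), `pivot_table` accepts a `fill_value` argument. A small sketch on hard-coded rows, with the timestamps shortened to `t1`/`t2` for brevity:

```python
import pandas as pd

rows = [
    {'value': 0, 'type': 'a', 'timestamp': 't1'},
    {'value': 0, 'type': 'b', 'timestamp': 't1'},
    {'value': 70, 'type': 'c', 'timestamp': 't1'},
    {'value': 30, 'type': 'a', 'timestamp': 't2'},
    {'value': 0, 'type': 'b', 'timestamp': 't2'},
]
df = pd.DataFrame(rows)
# fill_value=0 replaces the NaN that would otherwise appear for t2 / "c"
df = df.pivot_table(index='timestamp', columns='type', values='value',
                    fill_value=0).reset_index()
df.columns.name = None
```

This avoids a separate `fillna(0)` pass after the pivot.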
Upvotes: 3