Reputation: 709
I have a use case where I need to convert the existing columns of a dataframe into JSON and store them in just one column.
So far I tried this:
import pandas as pd
import json
df=pd.DataFrame([{'a':'sjdfb','b':'jsfubs'},{'a':'ouhbsdv','b':'cm osdn'}]) #Random data
jsonresult1=df.to_json(orient='records')
# '[{"a":"sjdfb","b":"jsfubs"},{"a":"ouhbsdv","b":"cm osdn"}]'
But I want each row's data to be just the string representation of its dictionary, not one list of all of them. So I tried this:
>>>jsonresult2=df.to_dict(orient='records')
>>>jsonresult2
# [{'a': 'sjdfb', 'b': 'jsfubs'}, {'a': 'ouhbsdv', 'b': 'cm osdn'}]
This is what I wanted the data to look like, but when I try to make this into a dataframe, the dataframe again ends up with the two columns [a, b]. Converting these dictionary objects to strings, however, lets me insert the column data into the dataframe in the required single-column format.
>>> jsonresult3 = []
>>> for i in range(len(jsonresult2)):
...     jsonresult3.append(str(jsonresult2[i]))
...
>>> jsonresult3
["{'a': 'sjdfb', 'b': 'jsfubs'}", "{'a': 'ouhbsdv', 'b': 'cm osdn'}"]
This is exactly what I wanted. And when I push this to a dataframe I get:
>>> df1
                                  0
0     {'a': 'sjdfb', 'b': 'jsfubs'}
1  {'a': 'ouhbsdv', 'b': 'cm osdn'}
But I feel this is a very inefficient way to do it. How do I make it look and work in an optimized way? My data can exceed 10M rows, and this approach takes too long.
Upvotes: 5
Views: 4131
Reputation: 294318
I'd first convert to a dictionary... make it into a Series... then apply pd.json.dumps:
pd.Series(df.to_dict('records'), df.index).apply(pd.json.dumps)
0 {"a":"sjdfb","b":"jsfubs"}
1 {"a":"ouhbsdv","b":"cm osdn"}
dtype: object
Or, with shorter code:
df.apply(pd.json.dumps, axis=1)
0 {"a":"sjdfb","b":"jsfubs"}
1 {"a":"ouhbsdv","b":"cm osdn"}
dtype: object
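Note that pd.json may not be exposed in newer pandas releases, so the standard library's json.dumps is the safer drop-in. A minimal sketch that also puts the strings into the single column the question asks for (the 'json' column name here is just an example):
import json
import pandas as pd

df = pd.DataFrame([{'a': 'sjdfb', 'b': 'jsfubs'},
                   {'a': 'ouhbsdv', 'b': 'cm osdn'}])

# serialize each row to a JSON string, then wrap the Series in a one-column dataframe
json_col = df.apply(lambda row: json.dumps(row.to_dict()), axis=1)
df1 = json_col.to_frame('json')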
We can improve performance by constructing the strings ourselves:
v = df.values.tolist()
c = df.columns.values.tolist()
pd.Series([str(dict(zip(c, row))) for row in v], df.index)
0 {'a': 'sjdfb', 'b': 'jsfubs'}
1 {'a': 'ouhbsdv', 'b': 'cm osdn'}
dtype: object
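Note that str(dict(...)) produces Python's dict repr (single quotes), which is not strictly valid JSON. If you need valid JSON strings, the same zip-based construction works with the standard library's json.dumps; a sketch:
import json

v = df.values.tolist()
c = df.columns.values.tolist()
# json.dumps gives double-quoted, parseable JSON instead of the dict repr
json_strings = pd.Series([json.dumps(dict(zip(c, row))) for row in v], df.index)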
If memory is an issue, I'd save df to a csv and read it back in line by line, constructing a new series or dataframe along the way.
df.to_csv('test.csv')
This is slower, but it gets around some of the memory issues.
s = pd.Series()
with open('test.csv') as f:
    c = f.readline().strip().split(',')[1:]
    for row in f:
        row = row.strip().split(',')
        s.set_value(row[0], str(dict(zip(c, row[1:]))))
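Series.set_value has since been deprecated (and later removed) in pandas; collecting the rows into plain lists and building the Series once at the end is the usual workaround and avoids growing the Series element by element. A sketch under the same assumptions about the csv layout (header line first, index in the first column):
idx, vals = [], []
with open('test.csv') as f:
    c = f.readline().strip().split(',')[1:]   # column names, skipping the index header
    for row in f:
        row = row.strip().split(',')
        idx.append(row[0])
        vals.append(str(dict(zip(c, row[1:]))))
s = pd.Series(vals, index=idx)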
Or you can skip the file export if you can keep the df in memory:
s = pd.Series()
c = df.columns.values.tolist()
for t in df.itertuples():
    s.set_value(t.Index, str(dict(zip(c, t[1:]))))
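Here too, set_value can be avoided with a single list comprehension; and whichever variant you use, to_frame turns the result into the one-column dataframe shown in the question:
c = df.columns.values.tolist()
s = pd.Series([str(dict(zip(c, t[1:]))) for t in df.itertuples()], index=df.index)
df1 = s.to_frame()   # one column holding the dict strings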
Upvotes: 6
Reputation: 19947
l = [{'a':'sjdfb','b':'jsfubs'},{'a':'ouhbsdv','b':'cm osdn'}]
#convert the dict elements to strings and then load them into a df.
pd.DataFrame([str(e) for e in l])
Out[949]:
0
0 {'a': 'sjdfb', 'b': 'jsfubs'}
1 {'a': 'ouhbsdv', 'b': 'cm osdn'}
Timings
%timeit pd.DataFrame([str(e) for e in l])
10000 loops, best of 3: 159 µs per loop
%timeit pd.Series(df.to_dict('records'), df.index).apply(pd.json.dumps)
1000 loops, best of 3: 254 µs per loop
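The same approach applied directly to the dataframe, so the intermediate list of dicts does not need to be kept around, would be (as a sketch):
df1 = pd.DataFrame([str(e) for e in df.to_dict('records')], index=df.index)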
Upvotes: 0