Reputation: 26752
I use the following function to chunk iterable Python objects.
from itertools import islice

def chunked_iterable(iterable, chunk_size):
    it = iter(iterable)
    while True:
        chunk = tuple(islice(it, chunk_size))
        if not chunk:
            break
        yield chunk
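For example, running it on a small range (the function is repeated here so the snippet runs on its own):

```python
from itertools import islice

def chunked_iterable(iterable, chunk_size):
    it = iter(iterable)
    while True:
        chunk = tuple(islice(it, chunk_size))
        if not chunk:  # islice returns an empty tuple once the iterator is exhausted.
            break
        yield chunk

print(list(chunked_iterable(range(1, 7), 2)))
# [(1, 2), (3, 4), (5, 6)]
```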
I'm looking to do something similar with a basic JSON file.
[
  {object1: 'object1'},
  {object2: 'object2'},
  {object3: 'object3'},
  {object4: 'object4'},
  {object5: 'object5'},
  {object6: 'object6'},
  etc...
]
Like this.
from pathlib import Path
import json

def json_chunk(json_array_of_objects, object_count):
    # What goes here?

if __name__ == '__main__':
    with open(Path(__file__).parent / 'raw_data.json') as raw_data:
        json_data = json.load(raw_data)
    for json_array_with_five_objects in json_chunk(json_data, 5):
        for obj in json_array_with_five_objects:
            print(obj)
Is the term I'm looking for "streaming" JSON data?
How do you stream JSON data?
As a learning exercise I'm trying to stick with base python functionality for now but answers using other packages are helpful too.
Upvotes: 0
Views: 3880
Reputation: 123501
After further thought, using the object_hook or object_pairs_hook arguments would require reading the entire file into memory first. To avoid that, here's something that reads the file incrementally, line by line.
I had to modify your example JSON file to make it valid JSON (what you have in your question is a Python dictionary). Note that this code is format-specific in the sense that it assumes each JSON object in the array lies entirely on a single line, although it could be changed to handle multiline object definitions if necessary.
So here's a sample test input file with valid JSON contents:
[
  {"thing1": "object1"},
  {"thing2": "object2"},
  {"thing3": "object3"},
  {"thing4": "object4"},
  {"thing5": "object5"},
  {"thing6": "object6"}
]
Code:
from itertools import zip_longest
import json
from pathlib import Path

def grouper(n, iterable, fillvalue=None):
    """ s -> (s0, s1...sn-1), (sn, sn+1...s2n-1), (s2n, s2n+1...s3n-1), ... """
    return zip_longest(*[iter(iterable)]*n, fillvalue=fillvalue)

def read_json_objects(fp):
    """ Read objects from file containing an array of JSON objects. """
    next(fp)  # Skip first line (the opening '[').
    for line in (line.strip() for line in fp):
        if line[0] == ']':  # Last line?
            break
        yield json.loads(line.rstrip(','))

def json_chunk(json_file_path, object_count):
    with open(json_file_path) as fp:
        for group in grouper(object_count, read_json_objects(fp)):
            yield tuple(obj for obj in group if obj is not None)

if __name__ == '__main__':
    json_file_path = Path(__file__).parent / 'raw_data.json'
    for array in json_chunk(json_file_path, 5):
        print(array)
Output from processing test file:
({'thing1': 'object1'}, {'thing2': 'object2'}, {'thing3': 'object3'}, {'thing4': 'object4'}, {'thing5': 'object5'})
({'thing6': 'object6'},)
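As for handling multiline object definitions: one way to sketch that with just the standard library is json.JSONDecoder.raw_decode, which parses a single JSON value off the front of a string buffer. The iter_json_objects helper below is hypothetical (not part of the code above); it reads the file in fixed-size chunks, so memory use stays bounded by the buffer size plus the largest single object:

```python
import io
import json

def iter_json_objects(fp, buf_size=1024):
    """Yield objects from a file containing a JSON array, even when an
    object spans several lines."""
    decoder = json.JSONDecoder()
    buf = fp.read(buf_size).lstrip()
    if buf.startswith('['):
        buf = buf[1:]  # Drop the array's opening bracket.
    while True:
        # Skip whitespace and the commas between array elements.
        buf = buf.lstrip().lstrip(',').lstrip()
        if buf.startswith(']'):
            return  # End of the array.
        try:
            obj, end = decoder.raw_decode(buf)
        except json.JSONDecodeError:
            more = fp.read(buf_size)  # Object incomplete; read more input.
            if not more:
                return  # Truncated input; nothing more to parse.
            buf += more
            continue
        yield obj
        buf = buf[end:]  # Keep the unconsumed remainder.

sample = '[\n  {"thing1":\n     "object1"},\n  {"thing2": "object2"}\n]'
print(list(iter_json_objects(io.StringIO(sample))))
# [{'thing1': 'object1'}, {'thing2': 'object2'}]
```

The chunks it yields could then be grouped with the same grouper recipe as above.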
Upvotes: 2
Reputation: 52
JSON is a text format that is completely language independent but uses conventions that are familiar to programmers of the C-family of languages, including C, C++, C#, Java, JavaScript, Perl, Python, and many others. These properties make JSON an ideal data-interchange language. -https://www.json.org/
JSON is a string of text. You would need to parse it back into Python objects to make it iterable.
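For example, json.loads turns the JSON text into a Python list that you can then iterate over:

```python
import json

# JSON arrives as text; json.loads parses it into Python objects
# (here, a list of dicts) that can then be iterated over.
text = '[{"thing1": "object1"}, {"thing2": "object2"}]'
objects = json.loads(text)
for obj in objects:
    print(obj)
# {'thing1': 'object1'}
# {'thing2': 'object2'}
```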
Upvotes: 0