Reputation: 71
My problem is as follows:
I have a txt file that holds nothing but a dictionary with a single key. The value of that single key is a huge list whose entries are dictionaries. Here is the first key:value pair for reference:
"data": [{"type": "utl", "id": "53150", "attributes": {"timestamp": "T13:00:00Z", "count": 0.0}}, [...etc.]
I tried to convert the value of the single-keyed dictionary into a list by calling the .values() method and wrapping the result in list():
list_variable = list(dict_variable.values())
But it seems this just wraps the value in a list with a single entry: when I try to access index 0, the program crashes (the list is too big), and when I try index 1 I get an IndexError saying the index is out of range. (My current idea is to first convert it into a list and THEN into a DataFrame.) I'm a complete beginner and have no idea what else I could try. What am I missing? Thanks a lot in advance for your helpful comments!
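Here is a minimal reproduction of what I think is happening, using a tiny stand-in dictionary instead of my real file (the variable names are just placeholders):
dict_variable = {"data": [{"id": "53150"}, {"id": "53151"}]}   # tiny stand-in for the real data
list_variable = list(dict_variable.values())
print(len(list_variable))    # 1 -- the whole "data" list ends up as a single entry
print(list_variable[0])      # the entire inner list at once
# print(list_variable[1])    # IndexError: list index out of range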
Upvotes: 1
Views: 61
Reputation: 1342
Does the code below help you?
test.txt
"data": [{"type": "utl", "id": "53150", "attributes": {"timestamp": "T13:00:00Z", "count": 0.0}}, {"type": "utl2", "id": "53151", "attributes": {"timestamp": "T12:00:00Z", "count": 1.0}}]
from re import findall
from pandas.io.json import json_normalize

with open("test.txt") as f:
    # Greedily grab everything from the first "{" to the last "}", eval it
    # into Python dicts, and let json_normalize flatten the nested dicts.
    print(json_normalize(eval(findall("{.+}", f.read())[0])))
Output:
  type    id attributes.timestamp attributes.count
0  utl 53150           T13:00:00Z              0.0
1 utl2 53151           T12:00:00Z              1.0
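If you would rather not use eval, and assuming the file content becomes valid JSON once you wrap it in braces (as your sample suggests), here is a sketch using the json module instead (needs pandas >= 1.0 for pd.json_normalize):
import json
import pandas as pd

with open("test.txt") as f:
    # '"data": [...]' is not valid JSON on its own, but '{"data": [...]}' is.
    d = json.loads("{" + f.read() + "}")

print(pd.json_normalize(d["data"]))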
Upvotes: 0
Reputation: 14103
This looks like JSON to me. Try using pandas.json_normalize:
d = {"data": [{"type": "utl", "id": "53150", "attributes": {"timestamp": "T13:00:00Z", "count": 0.0}}]}
pd.json_normalize(d['data'])
  type    id attributes.timestamp attributes.count
0  utl 53150           T13:00:00Z              0.0
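To get from your txt file to the d used above: if the file is already a complete JSON object (i.e. it starts with { and ends with }), you can load it directly with the standard json module; the filename below is just a placeholder:
import json
import pandas as pd

with open("yourfile.txt") as f:    # placeholder filename
    d = json.load(f)               # parse the whole file into a Python dict

df = pd.json_normalize(d["data"])  # one row per entry in the "data" list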
Upvotes: 2