Reputation: 2859
I have a nested dict like this
d = { time1 : { column1 : {data1, data2, data3},
                column2 : {data1, data2, data3},
                column3 : {data1, data2, data3},   # So on.
              },
      time2 : { column1 : ... },                   # Same as above
    }
data1, data2, data3 represent the type of data and not the data itself. I need to put this dict into a file like this:
Timestamp col1/data1 col1/data2 col1/data3 col2/data1 col2/data2 col2/data3 (and so on...)
My problem is: how do I ensure that the text goes under the corresponding column?
I.e. say I have put some text under time1, column14, and I come across column14 again at another timestamp. How do I keep track of the location of these columns in the text file?
The columns are just numbers (in string form).
Upvotes: 1
Views: 1018
Reputation: 391952
I would use JSON.
In Python 2.6 it's directly available; in earlier Pythons you have to download and install it (as simplejson).
import time

try:
    import json
except ImportError:
    import simplejson as json

out = open("myFile.json", "w")
json.dump({'timestamp': time.time(), 'data': d}, out, indent=2)
out.close()
Works nicely. Easy to edit manually. Easy to parse.
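For completeness, reading it back is just as short. A minimal sketch, assuming the myFile.json written above; note that JSON stores dictionary keys as strings, so numeric keys come back as strings:

try:
    import json
except ImportError:
    import simplejson as json

inp = open("myFile.json", "r")
loaded = json.load(inp)          # dict keys come back as strings
inp.close()

timestamp = loaded['timestamp']  # the time the file was written
d = loaded['data']               # your nested dict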
Upvotes: 3
Reputation: 23536
I would do it like this:
# get the row with the maximum number of columns
maxrowlen = 0
maxrowkey = ""
for timesid in d.keys():
    if len(d[timesid].keys()) > maxrowlen:
        maxrowlen = len(d[timesid].keys())
        maxrowkey = timesid
maxrowcols = sorted(d[maxrowkey].keys())

# prepare the writing
cell_format = "%10r"  # or whatever suits your data

# create the output string
lines = []
for timesid in d.keys():              # go through all times
    line = ""
    for col in maxrowcols:            # go through the standard columns
        colstr = ""
        if col in d[timesid].keys():  # create an entry for each standard column
            colstr += cell_format % d[timesid][col]  # either from actual data
        else:
            colstr += cell_format % ""               # or blanks
        line += colstr
    lines.append(line)
text = "\n".join(lines)
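If you also want the header row and a leading timestamp column as shown in the question, here is a sketch along the same lines. Taking the union of column keys across all timestamps (instead of the longest row) guarantees a column seen in only some timestamps still gets its own slot in every row; the myFile.txt name is just an example:

# union of all column keys across all timestamps, in a fixed order
allcols = sorted(set(col for cols in d.values() for col in cols))

cell_format = "%10r"
header = cell_format % "Timestamp" + "".join(cell_format % c for c in allcols)

lines = [header]
for timesid in sorted(d.keys()):
    line = cell_format % timesid
    for col in allcols:
        line += cell_format % d[timesid].get(col, "")  # data or blank
    lines.append(line)

out = open("myFile.txt", "w")   # example output file name
out.write("\n".join(lines) + "\n")
out.close()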
Upvotes: 1