Reputation: 3
I am trying to use a Python 3.x dictionary comprehension to create a nested dictionary structure. My comprehension syntax works, but it is very slow, especially with a large data set. I have also created my desired data structure using loops, and it runs much faster, but I would like to know if there is a way to improve this comprehension so that it runs as fast as, or faster than, my loop code.
My input data is a list of dictionaries, each dictionary outlining the specifics of an amateur radio contact (log entry). Here is a random subset of my data (limited to 20 entries, with non-essential keys removed to make this clearer):
[{'BAND': '20M',
  'CALL': 'AA9GL',
  'COUNTRY': 'UNITED STATES OF AMERICA',
  'QSO_DATE': '20170528',
  'TIME_ON': '132100'},
 {'BAND': '20M',
  'CALL': 'KE4BFI',
  'COUNTRY': 'UNITED STATES OF AMERICA',
  'QSO_DATE': '20150704',
  'TIME_ON': '034600'},
 {'BAND': '20M',
  'CALL': 'W8OTR',
  'COUNTRY': 'UNITED STATES OF AMERICA',
  'QSO_DATE': '20190119',
  'TIME_ON': '194645'},
 {'BAND': '10M',
  'CALL': 'FY5FY',
  'COUNTRY': 'FRENCH GUIANA',
  'QSO_DATE': '20150328',
  'TIME_ON': '161953'},
 {'BAND': '17M',
  'CALL': 'KD5FOY',
  'COUNTRY': 'UNITED STATES OF AMERICA',
  'QSO_DATE': '20190121',
  'TIME_ON': '145630'},
 {'BAND': '10M',
  'CALL': 'K5GQ',
  'COUNTRY': 'UNITED STATES OF AMERICA',
  'QSO_DATE': '20150110',
  'TIME_ON': '195326'},
 {'BAND': '10M',
  'CALL': 'CR5L',
  'COUNTRY': 'PORTUGAL',
  'QSO_DATE': '20151025',
  'TIME_ON': '182351'},
 {'BAND': '20M',
  'CALL': 'AD4TR',
  'COUNTRY': 'UNITED STATES OF AMERICA',
  'QSO_DATE': '20170325',
  'TIME_ON': '144606'},
 {'BAND': '40M',
  'CALL': 'EA8FJ',
  'COUNTRY': 'CANARY ISLANDS',
  'QSO_DATE': '20170618',
  'TIME_ON': '020300'},
 {'BAND': '10M',
  'CALL': 'PY2DPM',
  'COUNTRY': 'BRAZIL',
  'QSO_DATE': '20150104',
  'TIME_ON': '205900'},
 {'BAND': '17M',
  'CALL': 'MM0HVU',
  'COUNTRY': 'SCOTLAND',
  'QSO_DATE': '20170416',
  'TIME_ON': '130200'},
 {'BAND': '10M',
  'CALL': 'LW3DG',
  'COUNTRY': 'ARGENTINA',
  'QSO_DATE': '20161029',
  'TIME_ON': '210629'},
 {'BAND': '10M',
  'CALL': 'LW3DG',
  'COUNTRY': 'ARGENTINA',
  'QSO_DATE': '20151025',
  'TIME_ON': '210714'},
 {'BAND': '20M',
  'CALL': 'EI7HDB',
  'COUNTRY': 'IRELAND',
  'QSO_DATE': '20170423',
  'TIME_ON': '184000'},
 {'BAND': '20M',
  'CALL': 'KM0NAS',
  'COUNTRY': 'UNITED STATES OF AMERICA',
  'QSO_DATE': '20180102',
  'TIME_ON': '142151'},
 {'BAND': '10M',
  'CALL': 'PY2TKB',
  'COUNTRY': 'BRAZIL',
  'QSO_DATE': '20150328',
  'TIME_ON': '223535'},
 {'BAND': '40M',
  'CALL': 'EB1DJ',
  'COUNTRY': 'SPAIN',
  'QSO_DATE': '20170326',
  'TIME_ON': '232430'},
 {'BAND': '40M',
  'CALL': 'LU6PCK',
  'COUNTRY': 'ARGENTINA',
  'QSO_DATE': '20150615',
  'TIME_ON': '000200'},
 {'BAND': '17M',
  'CALL': 'G3RKF',
  'COUNTRY': 'ENGLAND',
  'QSO_DATE': '20190121',
  'TIME_ON': '144315'},
 {'BAND': '20M',
  'CALL': 'UA1ZKI',
  'COUNTRY': 'EUROPEAN RUSSIA',
  'QSO_DATE': '20170508',
  'TIME_ON': '141400'}]
I want to create a dictionary where each key is a band (10M, 20M, etc.) and each value is a dictionary whose keys are the countries contacted on that band and whose values are the count of contacts for each country on that band. Here is what my output looks like:
{'10M': {'ARGENTINA': 2,
         'BRAZIL': 2,
         'FRENCH GUIANA': 1,
         'PORTUGAL': 1,
         'UNITED STATES OF AMERICA': 1},
 '17M': {'ENGLAND': 1, 'SCOTLAND': 1, 'UNITED STATES OF AMERICA': 1},
 '20M': {'EUROPEAN RUSSIA': 1, 'IRELAND': 1, 'UNITED STATES OF AMERICA': 5},
 '40M': {'ARGENTINA': 1, 'CANARY ISLANDS': 1, 'SPAIN': 1}}
This is the comprehension that I came up with to create the output. It works, and with the limited data set shown here, it runs quickly, but with an input list of a couple thousand entries, it takes quite a long time to run.
worked_dxcc_by_band = {
    z["BAND"]: {
        x["COUNTRY"]: len([y["COUNTRY"]
                           for y in log_entries
                           if y["COUNTRY"] == x["COUNTRY"] and y["BAND"] == z["BAND"]])
        for x in log_entries
        if x["BAND"] == z["BAND"]
    }
    for z in log_entries
}
Because this is a triple-nested comprehension and all three loops iterate over the entire log_entries list, I assume that is why it gets so slow.
Is there a more efficient way to accomplish this with comprehension? I am fine using my loop to process the data but I am trying to enhance my skills regarding comprehensions so I thought this would be a good exercise!
This is what I am doing without using comprehension: I have a function analyze_log_entry which I call as I load each log entry in from a file.
from collections import Counter

worked_dxcc_by_band = {}

def analyze_log_entry(entry):
    if "BAND" in entry:
        if "COUNTRY" in entry:
            if entry["BAND"] in worked_dxcc_by_band:
                worked_dxcc_by_band[entry["BAND"]][entry["COUNTRY"]] += 1
            else:
                worked_dxcc_by_band[entry["BAND"]] = Counter()
                worked_dxcc_by_band[entry["BAND"]][entry["COUNTRY"]] = 1
This in itself may not be that efficient, but my full code has many similar blocks within the analyze_log_entry function that build multiple dictionaries. Because I only go through all of my data once, building the dictionaries as I go, it is probably much more efficient than the comprehension, which is essentially multiple loops. As I said, this is more of an exercise to learn how to accomplish the same task with different methods.
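As an aside, the nested membership checks in my function can be folded away with collections.defaultdict, which creates the inner Counter automatically on first access. A minimal sketch of the same idea (this defaultdict variant is my own rewrite, not the code I benchmarked):

```python
from collections import Counter, defaultdict

# a new band key automatically gets an empty Counter
worked_dxcc_by_band = defaultdict(Counter)

def analyze_log_entry(entry):
    # skip entries missing either key, as in the original
    if "BAND" in entry and "COUNTRY" in entry:
        worked_dxcc_by_band[entry["BAND"]][entry["COUNTRY"]] += 1

analyze_log_entry({'BAND': '20M', 'COUNTRY': 'IRELAND'})
analyze_log_entry({'BAND': '20M', 'COUNTRY': 'IRELAND'})
analyze_log_entry({'BAND': '10M', 'COUNTRY': 'BRAZIL'})
analyze_log_entry({'CALL': 'N0KEY'})  # ignored: no BAND/COUNTRY

# worked_dxcc_by_band['20M']['IRELAND'] is now 2
```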
Upvotes: 0
Views: 132
Reputation: 195408
EDIT: Dictionary comprehension version:
out = {band: dict(Counter(v['COUNTRY'] for v in g))
       for band, g in groupby(sorted(data, key=lambda k: k['BAND']),
                              lambda k: k['BAND'])}
You can combine itertools.groupby and collections.Counter:
from itertools import groupby
from collections import Counter
s = sorted(data, key=lambda k: k['BAND'])
out = {}
for band, g in groupby(s, lambda k: k['BAND']):
    c = Counter(v['COUNTRY'] for v in g)
    out[band] = dict(c)
from pprint import pprint
pprint(out)
Prints:
{'10M': {'ARGENTINA': 2,
         'BRAZIL': 2,
         'FRENCH GUIANA': 1,
         'PORTUGAL': 1,
         'UNITED STATES OF AMERICA': 1},
 '17M': {'ENGLAND': 1, 'SCOTLAND': 1, 'UNITED STATES OF AMERICA': 1},
 '20M': {'EUROPEAN RUSSIA': 1, 'IRELAND': 1, 'UNITED STATES OF AMERICA': 5},
 '40M': {'ARGENTINA': 1, 'CANARY ISLANDS': 1, 'SPAIN': 1}}
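Note that the sorted() call is load-bearing here: itertools.groupby only merges adjacent items, so on unsorted data the same band can appear as several separate groups. A quick illustration:

```python
from itertools import groupby

bands = ['20M', '10M', '20M']

# without sorting, '20M' yields two separate groups
unsorted_keys = [k for k, _ in groupby(bands)]
sorted_keys = [k for k, _ in groupby(sorted(bands))]

print(unsorted_keys)  # → ['20M', '10M', '20M']
print(sorted_keys)    # → ['10M', '20M']
```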
EDIT: Without modules:
out = {}
for i in data:
    out.setdefault(i['BAND'], {}).setdefault(i['COUNTRY'], 0)
    out[i['BAND']][i['COUNTRY']] += 1
from pprint import pprint
pprint(out)
Benchmark:
from timeit import timeit
from itertools import groupby
from collections import Counter
def sol_orig():
    worked_dxcc_by_band = {z["BAND"]: {x["COUNTRY"]: len([y["COUNTRY"] for y in data if y["COUNTRY"] == x["COUNTRY"] and y["BAND"] == z["BAND"]]) for x in data if x["BAND"] == z["BAND"]} for z in data}
    return worked_dxcc_by_band

def solution():
    out = {band: dict(Counter(v['COUNTRY'] for v in g)) for band, g in groupby(sorted(data, key=lambda k: k['BAND']), lambda k: k['BAND'])}
    return out

def solution_2():
    out = {}
    for i in data:
        out.setdefault(i['BAND'], {}).setdefault(i['COUNTRY'], 0)
        out[i['BAND']][i['COUNTRY']] += 1
    return out
t1 = timeit(lambda: solution(), number=10000)
t2 = timeit(lambda: solution_2(), number=10000)
t3 = timeit(lambda: sol_orig(), number=10000)
print(t1)
print(t2)
print(t3)
Prints:
0.18113317096140236
0.08159565401729196
3.5367472909856588
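Another single-pass option, sketched here as a further alternative (not one of the benchmarked solutions), is to count (band, country) pairs with one Counter and then nest the much smaller result:

```python
from collections import Counter

# tiny sample with the same keys as the question's data
data = [
    {'BAND': '20M', 'COUNTRY': 'UNITED STATES OF AMERICA'},
    {'BAND': '10M', 'COUNTRY': 'BRAZIL'},
    {'BAND': '20M', 'COUNTRY': 'IRELAND'},
    {'BAND': '20M', 'COUNTRY': 'UNITED STATES OF AMERICA'},
]

# one pass over the log: count each (band, country) pair
pair_counts = Counter((e['BAND'], e['COUNTRY']) for e in data)

# second pass over the counter (one entry per distinct pair) to nest it
out = {}
for (band, country), n in pair_counts.items():
    out.setdefault(band, {})[country] = n

# out['20M'] == {'UNITED STATES OF AMERICA': 2, 'IRELAND': 1}
```

This avoids sorting entirely, so it stays O(n) rather than O(n log n).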
Upvotes: 3