Reputation: 17322
I'm trying to figure out the best way to build a new dict from a list of dicts. Each dict in the list shares two common keys: the value of one becomes a key in the new dict, and the value of the other becomes an element in that key's list of values.
I managed to come up with 4 different solutions; here is my example:
list_of_dicts = [
    {'key_1': 'v2', 'key_2': 'some data 1', 'key_3': 'some random data'},
    {'key_1': 'v1', 'key_2': 'some data 2'},
    {'key_1': 'v1', 'key_2': 'some data 1'},
    {'key_1': 'v2', 'key_2': 'some data 2'}]
Solution 1 using collections.defaultdict:
from collections import defaultdict

group_by_key_1 = defaultdict(list)
for d in list_of_dicts:
    group_by_key_1[d['key_1']].append(d['key_2'])
group_by_key_1
output 1:
defaultdict(list,
            {'v2': ['some data 1', 'some data 2'],
             'v1': ['some data 2', 'some data 1']})
Solution 2 using dict.setdefault:
group_by_key_1 = {}
for d in list_of_dicts:
    group_by_key_1.setdefault(d['key_1'], []).append(d['key_2'])
group_by_key_1
output 2:
{'v2': ['some data 1', 'some data 2'], 'v1': ['some data 2', 'some data 1']}
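One subtlety of solution 2: setdefault evaluates its default argument on every iteration, so a fresh empty list is built even when the key already exists. A minimal sketch that counts the allocations (the Tracking class is a hypothetical helper for illustration):

```python
calls = []

class Tracking(list):
    """A list subclass that records every construction."""
    def __init__(self):
        calls.append('new list')
        super().__init__()

d = {}
for key in ['a', 'a', 'b']:
    # The Tracking() argument is constructed before setdefault runs,
    # so it is created on every iteration, used or not.
    d.setdefault(key, Tracking()).append(1)

print(len(calls))  # 3 constructions
print(len(d))      # but only 2 keys in the dict
```

defaultdict avoids this, since its factory is only called on a missing key.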
Solution 3, append if the key already exists, otherwise add a list containing the first element:
group_by_key_1 = {}
for d in list_of_dicts:
    if d['key_1'] not in group_by_key_1:
        group_by_key_1[d['key_1']] = [d['key_2']]
    else:
        group_by_key_1[d['key_1']].append(d['key_2'])
group_by_key_1
output 3:
{'v2': ['some data 1', 'some data 2'], 'v1': ['some data 2', 'some data 1']}
Solution 4, using itertools.groupby:
from itertools import groupby
from operator import itemgetter

list_of_dicts.sort(key=itemgetter('key_1'))
group = groupby(list_of_dicts, key=itemgetter('key_1'))
group_by_key_1 = {k: [e['key_2'] for e in v] for k, v in group}
group_by_key_1
output 4:
{'v1': ['some data 2', 'some data 1'], 'v2': ['some data 1', 'some data 2']}
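A caveat with solution 4: itertools.groupby only groups consecutive elements with equal keys, which is why the sort is required first (and note that .sort() mutates list_of_dicts in place). A minimal sketch of what happens without sorting:

```python
from itertools import groupby
from operator import itemgetter

unsorted = [{'k': 'a'}, {'k': 'b'}, {'k': 'a'}]

# Equal keys that are not adjacent end up in separate groups.
keys = [k for k, _ in groupby(unsorted, key=itemgetter('k'))]
print(keys)  # ['a', 'b', 'a'] -- 'a' appears twice
```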
Usually I use solution 1, and solutions 2 and 3 also seem fine, but which solution is the most efficient? Or is there another, better solution?
I want to run one of the above solutions against a list_of_dicts with a few million entries, where each dict in list_of_dicts can have between 10 and 1000 keys.
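For a quick comparison using only the standard library (no benchmarking package needed), timeit can be pointed at any two of the solutions; a minimal sketch with synthetic data, assuming 10 distinct 'key_1' values:

```python
from collections import defaultdict
from timeit import timeit

# Synthetic data: 10_000 dicts spread over 10 distinct 'key_1' values.
list_of_dicts = [{'key_1': f'v{i % 10}', 'key_2': f'some data {i}'}
                 for i in range(10_000)]

def sol_1(list_of_dicts):
    group_by_key_1 = defaultdict(list)
    for d in list_of_dicts:
        group_by_key_1[d['key_1']].append(d['key_2'])
    return group_by_key_1

def sol_2(list_of_dicts):
    group_by_key_1 = {}
    for d in list_of_dicts:
        group_by_key_1.setdefault(d['key_1'], []).append(d['key_2'])
    return group_by_key_1

for f in (sol_1, sol_2):
    print(f.__name__, timeit(lambda: f(list_of_dicts), number=100))
```

Absolute timings will vary by machine, so only the relative ordering is meaningful.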
Upvotes: 3
Views: 168
Reputation: 17322
Benchmarking the solutions for list_of_dicts sizes between 100 and 10_000 shows that solution 1 is the most efficient:
from collections import defaultdict
from itertools import groupby
from operator import itemgetter

from simple_benchmark import BenchmarkBuilder

b = BenchmarkBuilder()

@b.add_function()
def sol_1(list_of_dicts):
    group_by_key_1 = defaultdict(list)
    for d in list_of_dicts:
        group_by_key_1[d['key_1']].append(d['key_2'])
    return group_by_key_1

@b.add_function()
def sol_2(list_of_dicts):
    group_by_key_1 = {}
    for d in list_of_dicts:
        group_by_key_1.setdefault(d['key_1'], []).append(d['key_2'])
    return group_by_key_1

@b.add_function()
def sol_3(list_of_dicts):
    group_by_key_1 = {}
    for d in list_of_dicts:
        if d['key_1'] not in group_by_key_1:
            group_by_key_1[d['key_1']] = [d['key_2']]
        else:
            group_by_key_1[d['key_1']].append(d['key_2'])
    return group_by_key_1

@b.add_function()
def sol_4(list_of_dicts):
    list_of_dicts.sort(key=itemgetter('key_1'))
    group = groupby(list_of_dicts, key=itemgetter('key_1'))
    return {k: [e['key_2'] for e in v] for k, v in group}

@b.add_arguments('Size of list_of_dicts')
def argument_provider():
    for exp in range(2, 5):
        size = 10**exp
        keys_count = 1000
        list_of_dicts = [{f'key_{i}': f'v{i}' for i in range(keys_count)}
                         for _ in range(size)]
        yield size, list_of_dicts

r = b.run()
r.plot()
output: [benchmark plot]
Upvotes: 1