Reputation: 25
I am using the code beneath to scan through a dictionary and remove duplicate lists.
nduplicates = {k: [list(y) for y in {tuple(x[1:]) for x in v}] for k, v in results.items()}
The dictionary results is in this format:
{'example': [['london','5.123', '-3.123'],['bham','5.123', '-3.123'],['manc','51.23', '-3.453']], [etc..]}
Applying the list comprehension works and removes the duplicate nested lists, excluding the first element from the comparison; it leaves the dictionary like this:
{'example': [['london','5.123', '-3.123'],['manc','51.23', '-3.453']]}
I was wondering if there is a different way to go about removing the duplicates, as opposed to the already working solution. I have also tried the following, but it isn't fully working anyway:
print({k: [y for x, y in enumerate(v) \
if y not in v[1:x]] for k, v in results.items()})
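For reference, here is a variant of that comprehension that compares only on the parts after the first element and keeps the first occurrence. This is just a sketch, assuming the same results structure shown above; it is not the only possible fix:
results = {'example': [['london', '5.123', '-3.123'],
                       ['bham', '5.123', '-3.123'],
                       ['manc', '51.23', '-3.453']]}

# Keep a sublist only if its tail (everything after the name) has not
# already appeared in an earlier sublist of the same value.
nduplicates = {k: [y for i, y in enumerate(v)
                   if y[1:] not in [x[1:] for x in v[:i]]]
               for k, v in results.items()}

print(nduplicates)
# {'example': [['london', '5.123', '-3.123'], ['manc', '51.23', '-3.453']]}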
Thanks for any help! Any other workarounds for the list comprehension/remove duplicate code would be appreciated!
Upvotes: 0
Views: 79
Reputation: 2255
Check my code:
def remove_dup(my_lst):
    from copy import deepcopy
    from collections import OrderedDict
    # Reverse a copy so that, when later entries overwrite earlier ones in the
    # OrderedDict, the first occurrence from the original list is the one kept.
    my_lst = list(reversed(deepcopy(my_lst)))
    ordered_dict = OrderedDict()
    for sub_list in my_lst:
        # Key on everything after the first element (the coordinates).
        ordered_dict[tuple(sub_list[1:])] = sub_list
    return list(ordered_dict.values())
def main():
    results = {'example': [['london', '5.123', '-3.123'], ['bham', '5.123', '-3.123'], ['manc', '51.23', '-3.453']]}
    nduplicates = {k: remove_dup(v) for k, v in results.items()}
    print(nduplicates)

if __name__ == "__main__":
    main()
and you get:
{'example': [['manc', '51.23', '-3.453'], ['london', '5.123', '-3.123']]}
Reversing the list guarantees that if a duplicate is found, the first occurrence (in the original order) is the one that is kept.
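If you also want to preserve the original order of the sublists (instead of the reversed order shown above), here is a minimal sketch of a variation. It assumes the same input structure and Python 3.7+ for insertion-ordered dicts, and the helper name remove_dup_keep_order is just for illustration:
def remove_dup_keep_order(my_lst):
    # Plain dicts preserve insertion order on Python 3.7+; setdefault only
    # stores a value the first time its key (the coordinate tail) is seen,
    # so the first occurrence wins and the original order is kept.
    seen = {}
    for sub_list in my_lst:
        seen.setdefault(tuple(sub_list[1:]), sub_list)
    return list(seen.values())

results = {'example': [['london', '5.123', '-3.123'],
                       ['bham', '5.123', '-3.123'],
                       ['manc', '51.23', '-3.453']]}
print({k: remove_dup_keep_order(v) for k, v in results.items()})
# {'example': [['london', '5.123', '-3.123'], ['manc', '51.23', '-3.453']]}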
Upvotes: 1