Reputation: 446
Is
list(set(some_list))
a good way to remove duplicates from a list? (Python 3.3 if that matters)
(Edited to address some of the comments... it was perhaps too terse before).
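For reference, a minimal sketch of what I mean (the input list is a made-up example):

```python
some_list = [3, 1, 2, 1, 3, 4]

# set() drops duplicates but makes no ordering guarantee,
# so the result may come back in any order.
deduped = list(set(some_list))

print(sorted(deduped))  # [1, 2, 3, 4]
```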
Upvotes: 9
Views: 16780
Reputation: 308111
The method you show is probably the shortest and easiest to understand; that would make it Pythonic by most definitions.
If you need to preserve the order of the list, you can use collections.OrderedDict instead of set:
list(collections.OrderedDict((k, None) for k in some_list).keys())
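A runnable sketch of the above, with a placeholder input list:

```python
import collections

some_list = ['a', 'b', 'a', 'c', 'b']

# OrderedDict keeps the first occurrence of each key in insertion
# order; the None values are throwaway placeholders.
deduped = list(collections.OrderedDict((k, None) for k in some_list).keys())

print(deduped)  # ['a', 'b', 'c']
```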
Edit: as of Python 3.7 (or 3.6 if you're trusting) it's not necessary to use OrderedDict; a regular dict shares the property of retaining insertion order, so you can rewrite the above as:
list({k: None for k in some_list}.keys())
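The same sketch with a plain dict (assumes Python 3.7+ for guaranteed insertion order):

```python
some_list = ['a', 'b', 'a', 'c', 'b']

# Duplicate keys collapse to their first occurrence;
# on 3.7+ the dict preserves insertion order.
deduped = list({k: None for k in some_list}.keys())

print(deduped)  # ['a', 'b', 'c']
```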
If the elements aren't hashable but can be sorted, you can use itertools.groupby to remove duplicates:
list(k for k,g in itertools.groupby(sorted(some_list)))
Edit: the above can be written as a list comprehension, which some might consider more Pythonic:
[k for k,_ in itertools.groupby(sorted(some_list))]
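For example, with lists as elements (lists aren't hashable, so list(set(...)) would raise TypeError, but they compare and sort fine):

```python
import itertools

some_list = [[1, 2], [3], [1, 2], [3], [4, 5]]

# sorted() puts equal elements next to each other; groupby() then
# collapses each run into one key. Note the original order is lost.
deduped = [k for k, _ in itertools.groupby(sorted(some_list))]

print(deduped)  # [[1, 2], [3], [4, 5]]
```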
Upvotes: 5
Reputation: 6268
To preserve order, the shortest way (starting from Python 2.7) is:
>>> from collections import OrderedDict
>>> list(OrderedDict.fromkeys('abracadabra'))
['a', 'b', 'r', 'c', 'd']
If there is no need to preserve order, list(set(...)) is just fine.
Upvotes: 1
Reputation: 2825
(As suggested in the comments, adding this comment as an answer as well.)
Your own solution looks good and pretty Pythonic to me. If you're using NumPy, you can also do new_list = numpy.unique(some_list) (note that this returns a sorted NumPy array rather than a list, so the original order isn't preserved). This more or less 'reads like a sentence', which I believe is always a good benchmark for something being "Pythonic".
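A quick runnable sketch, with a made-up input list:

```python
import numpy

some_list = [3, 1, 2, 1, 3]

# numpy.unique sorts the unique values and returns an ndarray;
# .tolist() converts back to a plain Python list.
new_list = numpy.unique(some_list)

print(new_list.tolist())  # [1, 2, 3]
```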
Upvotes: 4