Reputation: 2565
Given the following list of lists
arrayNumbers = [[32,3154,53,13],[44,34,25,67], [687,346,75], [57,154]]
how can I efficiently get the number of lists having only 4 items?
In this case, the answer would be arrayNumbers_len = 2. I can do this with a loop, but that is not efficient at all. Since my real arrays contain millions of sublists, I need a way to do this extremely fast.
Here is my current solution:
batchSize = 4
counter = 0
for i in range(len(arrayNumbers)):
    if len(arrayNumbers[i]) == batchSize:
        counter += 1
Any suggestions?
Upvotes: 3
Views: 228
Reputation: 1143
Using filter with an anonymous function:
>>> Numbers = [[32,3154,53,13],[44,34,25,67],[687,346,75],[57,154]]
>>> filter(lambda x: len(x) == 4, Numbers)
[[32, 3154, 53, 13], [44, 34, 25, 67]]
>>> len(filter(lambda x: len(x) == 4, Numbers))
2
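The session above is Python 2, where filter returns a list. In Python 3, filter returns a lazy iterator with no len(), so the same idea needs a small adjustment; one sketch:

```python
Numbers = [[32, 3154, 53, 13], [44, 34, 25, 67], [687, 346, 75], [57, 154]]

# Python 3: filter() is lazy, so either materialize it with list()...
count_via_filter = len(list(filter(lambda x: len(x) == 4, Numbers)))

# ...or skip the intermediate list entirely with a generator expression
count_via_sum = sum(1 for x in Numbers if len(x) == 4)

print(count_via_filter, count_via_sum)  # 2 2
```

The generator-expression form avoids building a throwaway list, which matters when the input has millions of sublists.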
Upvotes: 1
Reputation: 53069
Here is a numpy solution. It is only marginally slower than the best of the non-numpy answers, and one advantage is that you can get the counts for all lengths at minimal additional cost, unless there are ridiculously large sublists:
>>> import numpy as np
>>> from timeit import timeit
>>> from collections import Counter
>>>
>>> lengths = np.random.randint(0, 100, (100_000))
>>> lists = [l * ['x'] for l in lengths]
>>>
>>>
# count one
# best Python
>>> list(map(len, lists)).count(16)
974
# numpy
>>> np.count_nonzero(16==np.fromiter(map(len, lists), int, len(lists)))
974
>>>
# count all
# best Python
>>> [cc for c, cc in sorted(Counter(map(len, lists)).items())]
[973, 1007, 951, 962, 1039, 962, 1028, 999, 970, 997,
...
1039, 997, 976, 1028, 1026, 969, 1106, 994, 1002, 1022]
>>>
# numpy
>>> np.bincount(np.fromiter(map(len, lists), int, len(lists)))
array([ 973, 1007, 951, 962, 1039, 962, 1028, 999, 970, 997,
...
1039, 997, 976, 1028, 1026, 969, 1106, 994, 1002, 1022])
Timings:
>>> kwds = dict(globals=globals(), number=100)
>>>
>>> timeit('list(map(len, lists)).count(16)', **kwds)
0.38265155197586864
>>> timeit('np.count_nonzero(16==np.fromiter(map(len, lists), int, len(lists)))', **kwds)
0.4332483590114862
>>>
>>> timeit('Counter(map(len, lists))', **kwds)
0.673214758047834
>>> timeit('np.bincount(np.fromiter(map(len, lists), int, len(lists)))', **kwds)
0.43800772598478943
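Tying this back to the original question: since np.bincount indexes counts by value, the number of sublists with exactly 4 items is just the entry at index 4. A minimal sketch with a toy input:

```python
import numpy as np

lists = [[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11], [12, 13]]

# Build an array of sublist lengths without an intermediate Python list
lengths = np.fromiter(map(len, lists), dtype=int, count=len(lists))

# counts[k] is the number of sublists of length k
counts = np.bincount(lengths)
print(counts[4])  # 2
```

Guard the index if the target length might exceed every sublist's length, since counts only extends to the maximum observed length.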
Upvotes: 2
Reputation: 43524
I ran my own tests in Python 2, and it appears that the list comprehension (@DBedrenko's updated solution) is the fastest, with @Prune's map(len, arrayNumbers).count(4) coming in second:
nLists = 1000000
arrayNumbers = [[np.random.randint(0, 10)]*np.random.randint(0, 10) for _ in range(nLists)]
batchSize = 4
In [67]:
%%timeit
counter = 0
for i in range(len(arrayNumbers)):
    if len(arrayNumbers[i]) == batchSize:
        counter += 1
10 loops, best of 3: 108 ms per loop
In [68]:
%%timeit
map(len, arrayNumbers).count(4)
10 loops, best of 3: 65.7 ms per loop
In [69]:
%%timeit
len(list(filter(lambda l: len(l) == 4, arrayNumbers)))
10 loops, best of 3: 121 ms per loop
In [70]:
%%timeit
len([l for l in arrayNumbers if len(l) == 4])
10 loops, best of 3: 58.6 ms per loop
In [71]:
%%timeit
sum(len(i)==4 for i in arrayNumbers)
10 loops, best of 3: 97.8 ms per loop
Upvotes: 2
Reputation: 96142
I went ahead and did some timings to show how these different approaches vary.
Note: var_arr has a million randomly-sized sublists:
In [31]: def for_loop(var_arr, batchsize):
    ...:     count = 0
    ...:     for x in var_arr:
    ...:         if len(x) == batchsize:
    ...:             count += 1
    ...:     return count
    ...:
In [32]: def with_map_count(var_arr, batchsize):
    ...:     return list(map(len, var_arr)).count(batchsize)
    ...:
In [33]: def lambda_filter(var_arr, batchsize):
    ...:     return len(list(filter(lambda l: len(l) == batchsize, var_arr)))
    ...:
In [34]: def sum_gen(var_arr, batchsize):
    ...:     return sum(len(x) == batchsize for x in var_arr)
    ...:
In [35]: from collections import Counter
    ...: def with_counter(var_arr, batchsize):
    ...:     return Counter(map(len, var_arr)).get(batchsize, 0)
    ...:
In [36]: %timeit for_loop(var_arr, 4)
82.9 ms ± 1.23 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
In [37]: %timeit with_map_count(var_arr, 4)
48 ms ± 873 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
In [38]: %timeit lambda_filter(var_arr, 4)
172 ms ± 3.76 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
In [39]: %timeit sum_gen(var_arr, 4)
150 ms ± 3.12 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
In [40]: %timeit with_counter(var_arr, 4)
75.6 ms ± 1.1 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
Some more timings:
In [50]: def with_list_comp_filter(var_arr, batchsize):
    ...:     return len([x for x in var_arr if len(x) == batchsize])
    ...:
    ...: def with_list_comp_filter_map(var_arr, batchsize):
    ...:     return len([x for x in map(len, var_arr) if x == batchsize])
    ...:
    ...: def loop_with_map(var_arr, batchsize):
    ...:     count = 0
    ...:     for x in map(len, var_arr):
    ...:         count += x == batchsize
    ...:     return count
    ...:
In [51]: %timeit with_list_comp_filter(var_arr, 4)
87.8 ms ± 4.35 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
In [52]: %timeit with_list_comp_filter_map(var_arr, 4)
62.7 ms ± 1.63 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
In [53]: %timeit loop_with_map(var_arr, 4)
91.9 ms ± 1.43 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
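To reproduce timings like these outside IPython, a small timeit harness can be sketched; the input size and iteration count here are assumptions, scaled down from the million-sublist input above:

```python
import random
import timeit

# Smaller stand-in for the million-sublist input used in the answer above
var_arr = [[0] * random.randint(0, 10) for _ in range(10_000)]

def with_map_count(var_arr, batchsize):
    return list(map(len, var_arr)).count(batchsize)

def with_list_comp(var_arr, batchsize):
    return len([x for x in var_arr if len(x) == batchsize])

for fn in (with_map_count, with_list_comp):
    t = timeit.timeit(lambda: fn(var_arr, 4), number=100)
    print(f"{fn.__name__}: {t:.3f}s")

# Sanity check: both approaches must agree on the count
assert with_map_count(var_arr, 4) == with_list_comp(var_arr, 4)
```

The lambda adds a little call overhead per iteration, but the relative ordering of the candidates is what matters.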
Upvotes: 3
Reputation: 5029
Would this be of acceptable performance?
len([l for l in arrayNumbers if len(l) == 4])
If this is still too slow, you can write the algorithm in C or C++, and call this from your Python code. See more here for details: https://docs.python.org/3.6/extending/extending.html
Upvotes: 3
Reputation: 77875
Sorry, but in raw information-science terms, you're stuck with an O(N) problem, where N is the number of elements in your list: you have to access each sublist's length to test it against batchSize. With that said, however, we can stuff it into a one-liner that gives Python a chance to optimize as best it can:
map(len, arrayNumbers).count(4)
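Note that in Python 3, map returns a lazy iterator with no .count() method, so the one-liner needs a list() wrapper:

```python
arrayNumbers = [[32, 3154, 53, 13], [44, 34, 25, 67], [687, 346, 75], [57, 154]]

# Python 3: materialize the map object before calling list.count()
result = list(map(len, arrayNumbers)).count(4)
print(result)  # 2
```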
Upvotes: 3