Reputation: 9213
I am creating a new list from a large old list based on a certain value being not equal to None. Which method of iteration is faster?
Option 1:

    new_list = []
    for values in old_list:
        if values[4] is not None:
            new_list.append(values[4])

Option 2:

    new_list = [x[4] for x in old_list if x[4] is not None]
Upvotes: 1
Views: 114
Reputation: 23058
Try to timeit both. But the second is widely known to be faster: broadly, map is faster than a list comprehension, which is faster than a for loop. A whole lot of literature is available on the web about this subject.
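As a concrete sketch of how you might timeit the asker's two options (the data and the function names here are made up for illustration):

```python
import timeit

# Toy stand-in for the asker's old_list: rows whose index-4 value is
# sometimes None (1000 rows, 250 of them with None)
old_list = [(0, 0, 0, 0, v) for v in [None, 1, 2, 3] * 250]

def with_loop():
    new_list = []
    for values in old_list:
        if values[4] is not None:
            new_list.append(values[4])
    return new_list

def with_comprehension():
    return [x[4] for x in old_list if x[4] is not None]

# Both build the same list; timeit reports total seconds for `number` runs
assert with_loop() == with_comprehension()
print(timeit.timeit(with_loop, number=1000))
print(timeit.timeit(with_comprehension, number=1000))
```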
EDIT:
I promised an update with actual, tangible results. Here is the code.
    import random
    import timeit

    old_list = [random.randint(0, 100000) for i in range(0, 100)]

    def floop(old_list):
        new_list = []
        for value in old_list:
            new_list.append(value)
        return new_list

    def lcomp(old_list):
        new_list = [value for value in old_list]
        return new_list

    if __name__ == '__main__':
        results_floop = timeit.Timer('floop(old_list)', "from __main__ import floop, old_list").timeit()
        results_lcomp = timeit.Timer('lcomp(old_list)', "from __main__ import lcomp, old_list").timeit()
        print("Function\t\tSeconds elapsed")
        print("For loop\t\t{}".format(results_floop))
        print("List comp\t\t{}".format(results_lcomp))
Remember: timeit runs the called function 1 million times by default and prints the total time elapsed in seconds. Read it as: executing this 1 million times took xx seconds.
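For a quicker check you can pass a smaller `number` argument instead of the default 1,000,000 (this snippet is just illustrative):

```python
import timeit

# Time a small snippet 10,000 times instead of the default 1,000,000
elapsed = timeit.timeit('[x for x in range(100)]', number=10_000)
print("10,000 runs took {:.4f} seconds".format(elapsed))
```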
Here are the results. I think they speak for themselves.

    ~/python » python3 lists.py
    Function         Seconds elapsed
    For loop         11.089475459069945
    List comp        5.985794545034878
Upvotes: 5
Reputation: 11060
The second is both faster and more readable. If you need more speed and are only iterating through the result once, you could use filter:

    new_list = filter(lambda x: x[4] is not None, old_list)

You could call list on the filtered result, but that might not have any speed advantage over the list comprehension (and is less Pythonic IMHO).
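A sketch of that one-pass behavior, with a made-up list of rows, assuming Python 3 where filter returns a lazy iterator:

```python
rows = [(1, 2, 3, 4, 'a'), (1, 2, 3, 4, None), (1, 2, 3, 4, 'b')]

# filter() builds no list up front; it yields matching rows on demand
filtered = filter(lambda x: x[4] is not None, rows)

# Iterating once consumes it -- fine if you only need a single pass
first_pass = list(filtered)
print(first_pass)  # the two rows whose index-4 value is not None

# A second pass over the same iterator yields nothing
print(list(filtered))  # []
```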
Upvotes: 1