Reputation: 11328
I'm testing defaultdict() vs dict.setdefault() efficiency using timeit. For some reason, each execution of timeit returns two values:
from collections import defaultdict
from timeit import timeit

dd = defaultdict(list)
ds = {}

def dset():
    for i in xrange(100):
        ds.setdefault(i, []).append(i)

def defdict():
    for y in xrange(100):
        dd[y].append(y)
Then I print the execution time of both functions and get 4 values back:
print timeit('dset()', setup='from def_dict import dset')
print timeit('defdict()', setup='from def_dict import defdict')
22.3247003333
23.1741990197
11.7763511529
12.6160995785
Time number executions of the main statement. This executes the setup statement once, and then returns the time it takes to execute the main statement a number of times, measured in seconds as a float. The argument is the number of times through the loop, defaulting to one million. The main statement, the setup statement and the timer function to be used are passed to the constructor.
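For reference, a minimal sketch of how a single timeit call is expected to behave per those docs (the number argument below is just the documented default written out explicitly; the statement and setup strings are the ones from my script):

from timeit import timeit

# timeit returns one float: the total seconds taken to run the
# statement `number` times, after executing the setup once.
elapsed = timeit('dset()', setup='from def_dict import dset', number=1000000)
print elapsed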
I'm on Python 2.7.
Upvotes: 3
Views: 197
Reputation: 7750
It appears that when timeit imports the script named in the setup argument, it actually causes the whole script to run again. When I use your code in Python 3 it gives me two values as well, but when I put the timeit statements in another script, they each output a single value, as they should. Additionally, if you add the line:
if __name__ == "__main__":
just above your timeit statements (with the print calls indented under it), then this will also resolve the issue and timeit will only be called once (each).
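A minimal sketch of the guarded script, assuming it is saved as def_dict.py (the module name used in your setup strings):

from collections import defaultdict
from timeit import timeit

dd = defaultdict(list)
ds = {}

def dset():
    for i in xrange(100):
        ds.setdefault(i, []).append(i)

def defdict():
    for y in xrange(100):
        dd[y].append(y)

if __name__ == "__main__":
    # These run only when the script is executed directly, not when
    # timeit's setup re-imports def_dict, so each prints a single value.
    print timeit('dset()', setup='from def_dict import dset')
    print timeit('defdict()', setup='from def_dict import defdict')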
As for why the two values differ, I would guess that the second value printed belongs to the original call, which only finishes after the nested call has returned and printed its result, so it includes the additional latency of a) writing to the print buffer and b) returning from the other call.
Upvotes: 1