Reputation: 892
I'm not quite sure what I'm missing here, but time
from the shell reports higher values for commands than timeit
from Python reports for the same code. And practically, the time
numbers look correct. What is happening?
time python -m timeit 'd={"i": 1}; d.update({"i": 2, "j": 3, "k": 4})'
leads to:
1000000 loops, best of 3: 0.373 usec per loop (timeit)
1.54s user 0.00s system 99% cpu 1.545 total (time)
time python -m timeit 'd={"i": 1}; d = dict(d, **{"i": 2, "j": 3, "k": 4})'
leads to:
1000000 loops, best of 3: 0.43 usec per loop (timeit)
1.77s user 0.00s system 99% cpu 1.778 total (time)
time python -m timeit 'd={"i": 1}; d["i"] = 2; d["j"] = 3; d["k"] = 4'
leads to:
10000000 loops, best of 3: 0.145 usec per loop (timeit)
5.98s user 0.00s system 99% cpu 5.986 total (time)
Upvotes: 1
Views: 40
Reputation: 892
Looks like the results are not in contradiction: the 3rd case is run 10x more times (10,000,000 loops), while cases 1 and 2 are run 1,000,000 times.
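A rough sanity check (my own arithmetic, not from the timeit output itself): multiplying the per-loop time by the loop count and the 3 repeats gives a lower bound on the total runtime that time should report, before interpreter startup and calibration overhead.

```python
def approx_total_seconds(per_loop_usec, loops, repeats=3):
    """Lower bound on the benchmark's total runtime in seconds:
    per-loop time x loops per repeat x number of repeats."""
    return per_loop_usec * 1e-6 * loops * repeats

# Case 3: 0.145 usec/loop at 10,000,000 loops
print(approx_total_seconds(0.145, 10_000_000))  # ~4.35 s, vs 5.986 s measured

# Case 1: 0.373 usec/loop at 1,000,000 loops
print(approx_total_seconds(0.373, 1_000_000))   # ~1.12 s, vs 1.545 s measured
```

The gap between these estimates and the measured totals is the overhead the answer below attributes to interpreter startup and timeit's own bookkeeping.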
Upvotes: 0
Reputation: 295520
time tells you how long it takes to run the benchmark.
timeit tells you how long it takes to run the code the benchmark is built to measure.
These are two very different things. time includes all the overhead of starting the interpreter; of collecting the statistics on how long your individual benchmarked runs take; of interpreting those results; of doing any additional runs which got thrown out because they were unstable or otherwise statistically unsuitable; of performing garbage collection between timed invocations; etc.
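A minimal sketch of what python -m timeit does under the hood, using the timeit module directly (loop count fixed by hand rather than auto-calibrated, as an illustrative assumption): run several repeats of N loops, report the fastest repeat as the per-loop figure, while the sum of all repeats is closer to what time observes, minus interpreter startup and calibration passes.

```python
import timeit

# The statement from the third benchmark in the question.
stmt = 'd = {"i": 1}; d["i"] = 2; d["j"] = 3; d["k"] = 4'
loops = 1_000_000

# 3 repeats of `loops` executions each; each element of `results`
# is the wall time of one full repeat, in seconds.
results = timeit.repeat(stmt, number=loops, repeat=3)

# timeit reports the best (fastest) repeat: the slower ones include
# noise such as GC pauses and scheduling jitter.
best = min(results)
print("best of 3: %.3f usec per loop" % (best / loops * 1e6))

# Summing all repeats approximates the total work `time` would see.
print("total benchmark time: %.2f s" % sum(results))
```

Note that even sum(results) undercounts what the shell's time reports, since it excludes interpreter startup and timeit's own loop-count calibration.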
Upvotes: 3