Randomblue

Reputation: 116283

Algorithm timing in Python

I want to compute how many times my computer can do counter += 1 in one second. A naive approach is the following:

from time import time

counter = 0
startTime = time()

while time() - startTime < 1:
    counter += 1

print counter

The problem is that evaluating time() - startTime < 1 may be considerably more expensive than counter += 1 itself, so the check distorts the very thing I am measuring.

Is there a way to get a cleaner 1-second sample of my algorithm?

Upvotes: 1

Views: 5700

Answers (4)

Kyoko Sasagava

Reputation: 121

Here is my approach:

import time

m = 0
timeout = time.time() + 1 

while True:
    if time.time() > timeout:
        break
    m = m + 1
print(m)
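One refinement of this loop, addressing the overhead the question worries about (a sketch, assuming a batch of 10000 increments is a fine-grained enough granularity): call the clock only once per batch, so the cost of the time.time() check is amortized over many increments.

```python
from time import time

# Sketch: check the clock only once per BATCH increments, so
# the time() call's cost is spread over the whole batch.
BATCH = 10000
m = 0
deadline = time() + 1
while time() < deadline:
    for _ in range(BATCH):
        m += 1
print(m)
```

The reported count is then a multiple of the batch size, which slightly overshoots the deadline by at most one batch.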

Upvotes: 0

Palantir

Reputation: 101

I have never worked with the time module, but judging from that code I assume time() returns seconds, so what if you do the per-second calculation after Ctrl+C is pressed? It would be something like:

#! /usr/bin/env python

from time import time
import signal
import sys

# The Ctrl+C (SIGINT) handler: compute the rate and exit.
def signal_handler(signal, frame):
    counts_per_sec = counter / (time() - startTime)
    print counts_per_sec
    sys.exit(0)
signal.signal(signal.SIGINT, signal_handler)

counter = 0
startTime = time()
while 1:
    counter = counter + 1

Of course, it won't be exact because of the time that passes between the last increment and the interrupt signal, but the longer you leave the script running, the more precise it will be :)
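On Unix, the same signal-based idea can stop the loop after exactly one second without pressing Ctrl+C, using signal.alarm (a sketch; SIGALRM is not available on Windows). The loop body never calls time() at all:

```python
import signal

# Sketch: let SIGALRM fire after 1 second; the handler raises
# an exception, which breaks out of the tight counting loop.
class Done(Exception):
    pass

def handler(signum, frame):
    raise Done

signal.signal(signal.SIGALRM, handler)
signal.alarm(1)

counter = 0
try:
    while True:
        counter += 1
except Done:
    signal.alarm(0)  # cancel any pending alarm

print(counter)
```

The residual error is just the signal delivery latency at the end of the second, rather than a per-iteration clock check.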

Upvotes: 0

Escualo

Reputation: 42082

Why don't you infer the time instead? You can run something like:

from datetime import datetime

def operation():
    counter = 0
    tbeg = datetime.utcnow()
    for _ in range(10**6):
        counter += 1
    td = datetime.utcnow() - tbeg
    return (td.microseconds + (td.seconds + td.days * 24 * 3600) * 10**6)/10.0**6

def timer(n):
    stack = []
    for _ in range(n):        
        stack.append(operation()) #  units of musec/increment
    print sum(stack) / len(stack)

if __name__ == "__main__":
    timer(10)

and get the average elapsed microseconds per increment; I get 0.09 (most likely very inaccurate). Now, it is a simple operation to infer that if I can make one increment in 0.09 microseconds, then I am able to make about 11258992 in one second.

I think the measurements are very inaccurate, but maybe it is a sensible approximation?
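For what it's worth, the same inference can be written with time.perf_counter (Python 3), which is intended for exactly this kind of interval measurement (a sketch; the helper name is made up for illustration):

```python
import time

# Sketch: measure the elapsed time for a fixed number of
# increments, then return the average time per increment.
def seconds_per_increment(n=10**6):
    counter = 0
    t0 = time.perf_counter()
    for _ in range(n):
        counter += 1
    return (time.perf_counter() - t0) / n

per_op = seconds_per_increment()
print("about %d increments per second" % int(1.0 / per_op))
```

Inverting the per-operation time gives the per-second rate directly, with no 1-second busy loop.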

Upvotes: 1

Sven Marnach

Reputation: 601629

The usual way to time algorithms is the other way around: use a fixed number of iterations and measure how long it takes to finish them. The best tool for such timings is the timeit module.

import timeit

print timeit.timeit("counter += 1", "counter = 0", number=100000000)

Note that timing counter += 1 seems rather pointless, though. What do you want to achieve?
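If you do want the "per second" figure the question asks for, you can infer it from the timeit result by dividing (a sketch, with a smaller iteration count to keep the run short):

```python
import timeit

# Sketch: time a fixed number of increments, then infer the
# per-second rate from the measured total.
n = 10**6
elapsed = timeit.timeit("counter += 1", "counter = 0", number=n)
print("about %d increments per second" % int(n / elapsed))
```
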

Upvotes: 10
