John Snow

Reputation: 2018

What method will make Python count the fastest?

I'm doing a chemistry lesson on the mole and how big a number it is. I came across text that said:

A computer that can count 10,000,000 atoms per second would take 2,000,000,000 years to count 1 mole of a substance

I thought it would be cool to actually demonstrate that to the class. So I have the script:

import time

t_end = time.time() + 1   # stop after one second
i = 0

while time.time() < t_end:
    i += 1                # count as fast as possible until time runs out

print(i)

Which prints the result: 6225324. That's about the right order of magnitude, but definitely lower than the statement. What would be a more efficient way to write this?
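For scale, here's a rough check of the quoted figure against the rate my script measured (back-of-the-envelope only; the constants are approximate):

avogadro = 6.022e23                       # atoms in one mole (approximate)
sec_per_year = 60 * 60 * 24 * 365.25

# Years needed at the quoted rate of 10,000,000 counts per second
print(avogadro / 1e7 / sec_per_year)      # roughly 1.9e9, i.e. about 2 billion years

# Years needed at the ~6.2 million counts per second my loop actually managed
print(avogadro / 6225324 / sec_per_year)  # roughly 3.1e9 years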

Upvotes: 2

Views: 302

Answers (2)

dawg

Reputation: 103884

If you just want to demonstrate that Avogadro's number is a really big number, you can do something like this:

import time

avo = 602214085700000000000000        # Avogadro's number (rounded)
sec_per_year = 60 * 60 * 24 * 365.25

t0 = time.time()
for i in range(avo):
    if i and i % 10000000 == 0:       # report every 10 million counts (skip i == 0)
        t = time.time() - t0
        avg_per_sec = i / t
        per_year = avg_per_sec * sec_per_year
        print("{:15,} in {:,.2f} sec -- only {:,} more to go! (and {:,.2f} years)".format(i, t, avo - i, avo / per_year))

Prints:

 10,000,000 in 2.17 sec -- only 602,214,085,699,999,990,000,000 more to go! (and 4,140,556,225.48 years)
 20,000,000 in 4.63 sec -- only 602,214,085,699,999,980,000,000 more to go! (and 4,422,153,353.15 years)
 30,000,000 in 7.12 sec -- only 602,214,085,699,999,970,000,000 more to go! (and 4,530,778,737.84 years)
 40,000,000 in 9.58 sec -- only 602,214,085,699,999,960,000,000 more to go! (and 4,571,379,181.80 years)
 50,000,000 in 12.07 sec -- only 602,214,085,699,999,950,000,000 more to go! (and 4,605,790,562.41 years)
 ...

With PyPy or Python 2 you need to use a while loop, because xrange overflows with Avogadro's number:

from __future__ import print_function

import time

avo = 602214085700000000000000        # Avogadro's number (rounded)
sec_per_year = 60 * 60 * 24 * 365.25

t0 = time.time()
i = 0
while i < avo:
    i += 1
    if i % 100000000 == 0:            # report every 100 million counts
        t = time.time() - t0
        avg_per_sec = i / t
        per_year = avg_per_sec * sec_per_year
        print("{:15,} in {:,.2f} sec -- only {:,} more to go! (and {:,.2f} years)".format(i, t, avo - i, avo / per_year))

On PyPy, you can almost see the end!

Prints:

100,000,000 in 0.93 sec -- only 602,214,085,699,999,900,000,000 more to go! (and 176,883,113.10 years)
200,000,000 in 1.85 sec -- only 602,214,085,699,999,800,000,000 more to go! (and 176,082,858.48 years)
300,000,000 in 2.76 sec -- only 602,214,085,699,999,700,000,000 more to go! (and 175,720,835.29 years)
400,000,000 in 3.68 sec -- only 602,214,085,699,999,600,000,000 more to go! (and 175,355,661.40 years)
500,000,000 in 4.59 sec -- only 602,214,085,699,999,500,000,000 more to go! (and 175,114,044.92 years)
600,000,000 in 5.49 sec -- only 602,214,085,699,999,400,000,000 more to go! (and 174,641,142.93 years)
700,000,000 in 6.44 sec -- only 602,214,085,699,999,300,000,000 more to go! (and 175,612,486.37 years)

Upvotes: 2

Jean-François Fabre

Reputation: 140196

There's too much overhead in your code because of the time.time() polling. It's a system call and it's not free, so it dwarfs the counting itself and biases your measurement.

The best way I can think of in Python is:

import time

start_time = time.time()
for i in range(10000000):   # use xrange if you run Python 2 or you'll have problems!!
    pass                    # empty body: we only time the bare loop
print("counted 10 million in {} seconds".format(time.time() - start_time))

On my computer, it took 0.5 seconds to do so.
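If you want to keep the shape of your original "count for one second" script, one option (just a sketch, with an arbitrary batch size CHECK_EVERY) is to poll the clock only once per batch of increments, which removes almost all of the system-call overhead:

import time

CHECK_EVERY = 100000              # poll the clock once per 100,000 increments
t_end = time.time() + 1
i = 0

while time.time() < t_end:
    for _ in range(CHECK_EVERY):  # tight inner loop with no time.time() call
        i += 1

print(i)

The count can overshoot the one-second mark by at most one batch of CHECK_EVERY increments, which is negligible for a classroom demonstration.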

Of course, Python being interpreted, you'll get better results running it with PyPy, or better still with compiled languages like C (or even Java, which has a JIT).

Upvotes: 3
