Reputation: 10830
This is not homework; I am just curious.
INFINITE is the key word here.
I wish to use it as for p in primes(). I believe that this is a built-in function in Haskell.
So, the answer cannot be as naive as "Just do a Sieve".
First of all, you do not know how many consecutive primes will be consumed. Well, suppose you could concoct 100 of them at a time. Would you still use the same sieve approach, along with the formula for the frequency of prime numbers?
I prefer a non-concurrent approach.
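To make the intended usage concrete, here is roughly what I have in mind (primes() stands for whatever generator is proposed; the islice cap is only there to keep this example finite):

from itertools import islice

for p in islice(primes(), 10):   # primes() is the hypothetical infinite generator
    print(p)                     # -> 2 3 5 7 11 13 17 19 23 29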
Thank you for reading (and writing ;) )!
Upvotes: 72
Views: 35809
Reputation: 71074
Since the OP asks for an efficient implementation, here's a significant improvement to the ActiveState 2002 code by David Eppstein/Alex Martelli (seen here in his answer): don't record a prime's info in the dictionary until its square is seen among the candidates. This brings the space complexity down to below O(sqrt(n)) instead of O(n), for n primes produced ( π(sqrt(n log n)) ~ 2 sqrt(n log n) / log(n log n) ~ 2 sqrt(n / log n) ). Consequently, the time complexity is also improved, i.e. it runs faster.
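(For the curious, that parenthetical estimate is the prime number theorem applied twice; a sketch of the reasoning, my own gloss rather than part of the original answer: producing n primes means sieving up to about the n-th prime, and only the base primes up to its square root need to be remembered,

$$ p_n \sim n\log n, \qquad \pi\big(\sqrt{n\log n}\big) \sim \frac{\sqrt{n\log n}}{\log\sqrt{n\log n}} = \frac{2\sqrt{n\log n}}{\log(n\log n)} \sim \frac{2\sqrt{n\log n}}{\log n} = 2\sqrt{n/\log n}. $$
)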
Creates a "sliding sieve" as a dictionary of current multiples of each base prime (i.e. below the sqrt of the current production point), together with their step values:
from itertools import count
                                         # ideone.com/aVndFM
def postponed_sieve():                   # postponed sieve, by Will Ness
    yield 2; yield 3; yield 5; yield 7;  # original code David Eppstein,
    sieve = {}                           #   Alex Martelli, ActiveState Recipe 2002
    ps = postponed_sieve()               # a separate base Primes Supply:
    p = next(ps) and next(ps)            # (3) a Prime to add to dict
    q = p*p                              # (9) its sQuare
    for c in count(9,2):                 # the Candidate
        if c in sieve:                   # c's a multiple of some base prime
            s = sieve.pop(c)             #     i.e. a composite ; or
        elif c < q:
            yield c                      # a prime
            continue
        else:   # (c==q):                # or the next base prime's square:
            s = count(q+2*p, 2*p)        #    (9+6, by 6 : 15,21,27,33,...)
            p = next(ps)                 #    (5)
            q = p*p                      #    (25)
        for m in s:                      # the next multiple
            if m not in sieve:           # no duplicates
                break
        sieve[m] = s                     # original test entry: ideone.com/WFv4f
(The older, original code here was edited to incorporate changes as seen in the answer by Tim Peters, below.) See also this for a related discussion.
Similar 2-3-5-7 wheel-based code runs ~ 2.15x faster (which is very close to the theoretical improvement of 3/2 * 5/4 * 7/6 = 2.1875).
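For illustration, here is a small sketch of how the recurring gap pattern of such a 2-3-5-7 wheel can be generated (this is only the wheel itself, not the wheel-based sieve; wheel_gaps is a name introduced here for the sketch):

from functools import reduce

def wheel_gaps(basis=(2, 3, 5, 7)):
    circ = reduce(lambda a, b: a * b, basis)       # circumference: 2*3*5*7 = 210
    spokes = [n for n in range(1, circ + 1)
              if all(n % p for p in basis)]        # the 48 residues coprime to 210
    spokes.append(spokes[0] + circ)                # wrap around, to get the last gap
    return [b - a for a, b in zip(spokes, spokes[1:])]

gaps = wheel_gaps()
print(sum(gaps), len(gaps))   # 210 48 : only 48 of each 105 odds remain candidates, 105/48 = 2.1875

Cycling over those 48 gaps, instead of a constant step of 2, is what gives the theoretical 2.1875x factor mentioned above.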
2022 update: I've recently chanced upon this old "NESL" thing from the 1990s which actually uses the same sqrt-recursion trick. So nothing is new under the sun. :)
The code can be straightforwardly augmented to start the primes enumeration directly from a given value. This can be seen in this JS-based entry.
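For the record, here is one rough way such an augmentation could look in Python (my own sketch, not the linked JS code; it reuses postponed_sieve from above and pre-seeds the dictionary with one multiple of each base prime below the starting point):

from itertools import count

def postponed_sieve_from(start):                 # primes >= start (a sketch)
    for small in (2, 3, 5, 7):
        if small >= start:
            yield small
    sieve = {}
    ps = postponed_sieve()                       # base primes supply, as before
    p = next(ps) and next(ps)                    # (3)
    q = p*p                                      # (9)
    c = max(9, start | 1)                        # first odd candidate >= start
    while q < c:                                 # pre-seed the base primes below sqrt(c)
        step = 2*p
        m = q + (c - q + step - 1)//step * step  # first multiple of p in q, q+2p, ... that is >= c
        while m in sieve:                        # no duplicates
            m += step
        sieve[m] = count(m + step, step)
        p = next(ps); q = p*p
    for c in count(c, 2):                        # from here on, exactly as the original
        if c in sieve:
            s = sieve.pop(c)
        elif c < q:
            yield c
            continue
        else:
            s = count(q + 2*p, 2*p)
            p = next(ps); q = p*p
        for m in s:
            if m not in sieve:
                break
        sieve[m] = s

The pre-seeding loop performs about π(sqrt(start)) dictionary insertions, so starting far out stays cheap.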
Upvotes: 80
Reputation: 8180
I know the post is old, but I came across this question... The following code is based on a very simple idea: a growing sieve of Eratosthenes. Although this solution is slower than the best ones here, it is easy to grasp and designed to be readable...
I used integers to store the results of the sieve. In binary format, an integer is a list of 0s and 1s: a 0 at position i if i is not a prime, a 1 if it may be a prime. The requisite infinity is a result of the fact that Python 3 integers are unbounded.
def primes():
    container, size = 1 << 2, 3 # we start with 0b100 (from right to left: 0 and 1 are not primes, 2 is a prime)
    last_prime = 1
    while True:
        prime = next((j for j in range(last_prime+1, size) if container & 1 << j), None) # find the next prime
        while not prime:
            container, size = expand(container, size, 2**16) # add 65536 cells and sieve the container
            prime = next((j for j in range(last_prime+1, size) if container & 1 << j), None)
        yield prime
        last_prime = prime
How to expand the container? Just add a bunch of 1s at the left of the container (in binary format) and sieve them. This is identical to the standard sieve, with a slight difference. In the standard sieve, if we find a prime i, we start to cross the cells at i*i, with a step of i. Here, this may have been done for the first part of the container. We just need to start at the beginning of the new part of the container if it is farther than i*i.
def expand(container, size, n):
    new_size = size + n
    container += (1 << (new_size + 1) - 1) - (1 << size) # add n 1's
    for i in range(2, new_size):
        if container & (1 << i): # i is a prime
            t = sum(1 << j for j in range(max(i, size // i)*i, new_size, i)) # set 1 for all multiples
            container &= ~t # cross the cells
    return container, new_size
A test, counting the primes below one million:
import itertools
assert 78498 == len(list(itertools.takewhile(lambda p: p<1000000, primes())))
Upvotes: 0
Reputation: 70602
For posterity, here's a rewrite of Will Ness's beautiful algorithm for Python 3. Some changes are needed (iterators no longer have .next() methods, but there's a new next() builtin function). Other changes are for fun (using the new yield from <iterable> replaces four yield statements in the original). More are for readability (I'm not a fan of overusing ;-) 1-letter variable names).

It's significantly faster than the original, but not for algorithmic reasons. The speedup is mostly due to removing the original's add() function, doing that inline instead.
def psieve():
    import itertools
    yield from (2, 3, 5, 7)
    D = {}
    ps = psieve()
    next(ps)
    p = next(ps)
    assert p == 3
    psq = p*p
    for i in itertools.count(9, 2):
        if i in D:      # composite
            step = D.pop(i)
        elif i < psq:   # prime
            yield i
            continue
        else:           # composite, = p*p
            assert i == psq
            step = 2*p
            p = next(ps)
            psq = p*p
        i += step
        while i in D:
            i += step
        D[i] = step
Upvotes: 51
Reputation: 163
Here's a pretty fast infinite generator, written in Python 2 but easily adjusted to Python 3. To use it to sum the primes below 10**9, use the following:
from itertools import takewhile
from functools import partial
from operator import gt
print (sum(takewhile(partial(gt, 10**9), prime_gen_inf())))
It's a segmented sieve, faster but obviously less elegant than Will Ness's algorithm.
from operator import mul
from functools import reduce
def prod(x): return reduce(mul, x, 1)
def build_sieve(wheel):
    w = prod(wheel)
    w_phi = prod([p-1 for p in wheel])
    rems = [a for a in range(w) if all(a % p for p in wheel)]
    assert len(rems) == w_phi
    inv = {a: pow(a, w_phi - 1, w) for a in rems}
    try:
        known_p = wheel + rems[1 : rems.index(rems[1]*rems[1])]
    except ValueError:
        known_p = wheel + rems[1:]
    return wheel, w, w_phi, rems, inv, known_p
#Adjust the chunk variable based on your computer's architecture.
#
#Adjust the line with #! if you don't need "true" infinite. If you don't need
#primes larger than 1<<32, use array('H', []), if 1<<64 use 'L', if 1<<128 (in
#Python3) use 'Q', otherwise use empty list [].
#To save memory, comment out the lines with #*, and uncomment the commented-out
#lines
import itertools
from itertools import islice, count, compress, izip
chain_f = itertools.chain.from_iterable
from array import array

def prime_gen_inf(chunk=250000, sieve_info = build_sieve([2,3,5,7])):
    """ Indefinitely yields primes """
    wheel, w, w_phi, rems, inv, known_p = sieve_info
    for p in known_p: yield p
    new_n = 0;
    while True:
        size = min(chunk, (p * p - new_n) / w)
        sieve = bytearray([1]) * size * w_phi
        n, new_n = new_n, new_n + size * w
        if not n:
            zero = bytearray([0])
            seen = len(known_p) - len(wheel) + 1
            sieve[:seen:1] = zero * seen
            p_gen = islice(prime_gen_inf(), len(wheel), None)
            new_p = next(p_gen)
            ps = []                   #! array('H', [])
            p_invs = bytearray([])    #*
        while new_p * new_p < new_n:
            ps.append(new_p)
            p_invs.append(inv[new_p % w])    #*
            new_p = next(p_gen)
        for p, p_inv, modp in izip(ps, p_invs, [-n % p for p in ps]):          #*
            s = [(modp + p * (p_inv * (r - modp) % w)) / w for r in rems]      #*
        #for p in ps:
        #    s = [(-n%p + p * (inv[p%w] * (r - -n%p) % w)) / w for r in rems]
            for i, start in enumerate(s):
                slice_size = ((size - start - 1) / p + 1)
                sieve[i + start * w_phi :: p * w_phi] = zero * slice_size
        for p in compress(chain_f(izip(*[count(n+r, w) for r in rems])), sieve):
            yield p
Upvotes: 2
Reputation: 87291
Here is a simple but not terribly slow one using a heap instead of a dict:
import heapq

def heap_prime_gen_squares():
    yield 2
    yield 3
    h = [(9, 6)]
    n = 5
    while True:
        a, b = h[0]
        while n < a:
            yield n
            heapq.heappush(h, (n * n, n << 1))
            n += 2
        heapq.heapreplace(h, (a + b, b))  # Replace h[0], which is still (a, b).
Based on my speed measurements of user time for the first 1 million primes, the dict-based approaches seem to be the fastest.
Upvotes: 1
Reputation: 87291
Here is a complicated heap-based implementation, which is not much faster than other heap-based implementations (see the speed comparison in another answer of mine), but it uses much less memory.

This implementation uses two heaps (tu and vw), which contain the same number of elements. Each element is an int pair. In order to find all primes up to q**2 (where q is a prime), each heap will contain at most 2*pi(q-1) elements, where pi(x) is the number of positive primes not larger than x. So the total number of integers is at most 4*pi(floor(sqrt(n))). (We could gain a factor of 2 on memory by pushing half as much stuff to the heap, but that would make the algorithm slower.)

Other dict- and heap-based approaches (e.g. erat2b, heap_prime_gen_squares and heapprimegen) above store about 2*pi(n) integers, because they extend their heap or dict every time they find a prime. As a comparison: to find the first 1_000_000 primes, this implementation stores less than 4141 integers, while the other implementations store more than 1_000_000 integers.
import heapq

def heap_prime_gen_smallmem():
    yield 2
    yield 3
    f = 5
    fmar3 = 2
    q = 7
    q6 = 7 * 6
    qmar3 = 4
    tu = [(25, 30), (35, 30)]
    vw = [(25, 30), (35, 30)]
    while True:
        qmar3 += 2
        if qmar3 == 6:
            qb = q + 4
            q6b = q6 + 24
            qmar3 = 2
        else:
            qb = q + 2
            q6b = q6 + 12
        if q < tu[0][0]:
            d = q * q
            while f < d:
                a, b = vw[0]
                if f < a:
                    yield f
                else:
                    a, b = vw[0]
                    heapq.heapreplace(vw, (a + b, b))
                    a, b = vw[0]
                    while f >= a:
                        heapq.heapreplace(vw, (a + b, b))
                        a, b = vw[0]
                fmar3 += 2
                if fmar3 == 6:
                    f += 4
                    fmar3 = 2
                else:
                    f += 2
            c = q * qb
            heapq.heappush(tu, (d, q6))
            heapq.heappush(tu, (c, q6))
            heapq.heappush(vw, (d, q6))
            heapq.heappush(vw, (c, q6))
        else:
            a, b = tu[0]
            heapq.heapreplace(tu, (a + b, b))
            a, b = tu[0]
            while q >= a:
                heapq.heapreplace(tu, (a + b, b))
                a, b = tu[0]
        q = qb
        q6 = q6b
Upvotes: 2
Reputation: 95971
And another answer, more memory-efficient than my erat3 answer here:
import heapq

def heapprimegen():
    hp= []
    yield 2
    yield 3
    cn= 3
    nn, inc= 3, 6
    while 1:
        while cn < nn:
            yield cn
            heapq.heappush(hp, (3*cn, 2*cn))
            cn+= 2
        cn= nn+2
        nn, inc= heapq.heappushpop(hp, (nn+inc, inc))
It maintains a heap (a list) of prime multiples rather than a dictionary. It loses some speed, obviously.
Upvotes: 2
Reputation: 56792
Do a segmented sieve, where the size of a segment is determined by available memory or the maximal size of a bitset.
For each segment represent the numbers in some interval [n; n + segment_size) as a bit set and sieve with all prime numbers below the square root of the upper bound.
Using a bit set uses less memory than a hash table or tree data structure, because you are working with dense sets of numbers.
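A rough sketch of that approach in Python (my own illustration of the idea, not code from this answer; a bytearray stands in for the bit set, and the default segment_size is an arbitrary choice):

import math

def segmented_primes(segment_size=2**16):
    """Endless generator: sieve the interval [low, low + segment_size) one chunk at a time."""
    yield 2
    base = [2]                                       # primes up to sqrt of the current chunk's upper bound
    low = 3
    while True:
        high = low + segment_size
        limit = math.isqrt(high) + 1
        if base[-1] < limit:                         # extend the base primes with a small plain sieve;
            small = bytearray([1]) * (limit + 1)     # sqrt(high) grows slowly, so this stays cheap
            small[0:2] = b"\x00\x00"
            for i in range(2, math.isqrt(limit) + 1):
                if small[i]:
                    small[i*i :: i] = bytearray(len(small[i*i :: i]))
            base = [i for i in range(2, limit + 1) if small[i]]
        seg = bytearray([1]) * segment_size          # the "bit set" for [low, high), one byte per number here
        for p in base:
            start = max(p * p, (low + p - 1) // p * p)   # first multiple of p inside the segment
            seg[start - low :: p] = bytearray(len(seg[start - low :: p]))
        for i, flag in enumerate(seg):
            if flag:
                yield low + i
        low = high

import itertools
print(list(itertools.islice(segmented_primes(), 10)))   # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]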
Upvotes: 6
Reputation: 3830
Another way to do it:
import itertools

def primeseq():
    prime = [2]
    num = 0
    yield 2
    for i in itertools.count(3, 2):
        is_prime = True
        for num in prime:
            if i % num == 0:
                is_prime = False
                break
            elif num ** 2 > i:
                break
        if is_prime:
            prime.append(i)
            yield i
Upvotes: 2
Reputation: 95971
The erat2 function from the cookbook can be further sped up (by about 20-25%):
import itertools as it

def erat2a( ):
    D = { }
    yield 2
    for q in it.islice(it.count(3), 0, None, 2):
        p = D.pop(q, None)
        if p is None:
            D[q*q] = q
            yield q
        else:
            # old code here:
            # x = p + q
            # while x in D or not (x&1):
            #     x += p
            # changed into:
            x = q + 2*p
            while x in D:
                x += 2*p
            D[x] = p
The not (x&1) check verifies that x is odd. However, since both q and p are odd, by adding 2*p half of the steps are avoided along with the test for oddity.
If one doesn't mind a little extra fanciness, erat2 can be sped up by 35-40% with the following changes (NB: needs Python 2.7+ or Python 3+ because of the itertools.compress function):
import itertools as it

def erat3( ):
    D = { 9: 3, 25: 5 }
    yield 2
    yield 3
    yield 5
    MASK= 1, 0, 1, 1, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0,
    MODULOS= frozenset( (1, 7, 11, 13, 17, 19, 23, 29) )
    for q in it.compress(
            it.islice(it.count(7), 0, None, 2),
            it.cycle(MASK)):
        p = D.pop(q, None)
        if p is None:
            D[q*q] = q
            yield q
        else:
            x = q + 2*p
            while x in D or (x%30) not in MODULOS:
                x += 2*p
            D[x] = p
The erat3 function takes advantage of the fact that all primes (except for 2, 3, 5) modulo 30 result in only eight residues: the ones included in the MODULOS frozenset. Thus, after yielding the initial three primes, we start from 7 and work only with those candidates.

The candidate filtering uses the itertools.compress function; the "magic" is in the MASK sequence. MASK has 15 elements (there are 15 odd numbers in every 30 numbers, as chosen by the itertools.islice function), with a 1 for every possible candidate, starting from 7. The cycle repeats as specified by the itertools.cycle function.

The introduction of the candidate filtering needs another modification: the or (x%30) not in MODULOS check. The erat2 algorithm processed all odd numbers; the erat3 algorithm processes only r30 candidates, so we need to make sure that all D.keys() can only be such (false) candidates.
On an Atom 330 Ubuntu 9.10 server, Python versions 2.6.4 and 3.1.1+:
$ testit
up to 8192
==== python2 erat2 ====
100 loops, best of 3: 18.6 msec per loop
==== python2 erat2a ====
100 loops, best of 3: 14.5 msec per loop
==== python2 erat3 ====
Traceback (most recent call last):
…
AttributeError: 'module' object has no attribute 'compress'
==== python3 erat2 ====
100 loops, best of 3: 19.2 msec per loop
==== python3 erat2a ====
100 loops, best of 3: 14.1 msec per loop
==== python3 erat3 ====
100 loops, best of 3: 11.7 msec per loop
On an AMD Geode LX Gentoo home server, Python 2.6.5 and 3.1.2:
$ testit
up to 8192
==== python2 erat2 ====
10 loops, best of 3: 104 msec per loop
==== python2 erat2a ====
10 loops, best of 3: 81 msec per loop
==== python2 erat3 ====
Traceback (most recent call last):
…
AttributeError: 'module' object has no attribute 'compress'
==== python3 erat2 ====
10 loops, best of 3: 116 msec per loop
==== python3 erat2a ====
10 loops, best of 3: 82 msec per loop
==== python3 erat3 ====
10 loops, best of 3: 66 msec per loop
A primegen.py module contains the erat2, erat2a and erat3 functions. Here follows the testing script:
#!/bin/sh
max_num=${1:-8192}
echo up to $max_num
for python_version in python2 python3
do
for function in erat2 erat2a erat3
do
echo "==== $python_version $function ===="
$python_version -O -m timeit -c \
-s "import itertools as it, functools as ft, operator as op, primegen; cmp= ft.partial(op.ge, $max_num)" \
"next(it.dropwhile(cmp, primegen.$function()))"
done
done
Upvotes: 85
Reputation: 15416
This isn't originally my code; however, it's worth posting. The original can be found here: http://code.activestate.com/recipes/117119/
def gen_primes():
    D = {}
    q = 2  # first integer to test for primality.
    while True:
        if q not in D:
            # not marked composite, must be prime
            yield q
            # first multiple of q not already marked
            D[q * q] = [q]
        else:
            for p in D[q]:
                D.setdefault(p + q, []).append(p)
            # no longer need D[q], free memory
            del D[q]
        q += 1
It's a generator, so use it like any other.
primes = gen_primes()
for p in primes:
    print p
On my desktop, it takes 1.62s to generate 1 million primes and put them into a set.
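For reference, a rough way to reproduce such a measurement (the 1.62s figure is from the answer's machine; results will vary):

import time
from itertools import islice

start = time.time()
million = set(islice(gen_primes(), 1000000))     # generate 1 million primes and put them into a set
print("%d primes in %.2fs" % (len(million), time.time() - start))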
Upvotes: 8
Reputation: 1922
Here's a generator that's a little truer to how it's done in Haskell: filtering against composites of known primes, then adding the remaining primes to the list.
def gen_primes():
    primes = []
    i = 2
    while True:
        prime = True
        for p in primes:
            if not (i % p):
                prime = False
                break
        if prime:
            yield i
            primes.append(i)
        i += 1
Upvotes: 0