MNRC

Reputation: 175

Time complexity of an algorithm to find prime numbers

I was curious about prime numbers and would like to know the most efficient way to find relatively small prime numbers, for a range up to, say, 10 million. I read that the sieve of Eratosthenes (SOE) is the most efficient way to find smaller primes. I implemented SOE in Python but have a few questions:

  1. The worst-case running time of my algorithm seems to be O(n^2). I'm still learning, so I know this algorithm can be made more efficient.

  2. Is there a difference between the most efficient mathematical way and the most efficient programming way to find prime numbers? Mathematically, SOE is one of the fastest, but is SOE actually all that fast in practice?

    def FindPrime(n):
        primes = [2, 3]
        for num in range(4, n):
            notprime = False
            for p in primes:
                if num % p == 0:
                    notprime = True
            if notprime == False:
                primes.append(num)
        return primes

    print FindPrime(100)
    

Upvotes: 2

Views: 6912

Answers (2)

Spektre

Reputation: 51863

  1. uʍop ǝpısdn is right: your code is not SOE

  2. you can find my SOE implementation here

    • which makes the prime finding more efficient than your solution
  3. complexity of my implementation of SOE

    • time: T(0.5·n·DIGAMMA(CEIL(SQRT(n))) + 0.3511·n) if sqrt(N) is used as suggested in the comment inside the code
    • time(n= 1M): T(3.80*n)
    • time(n= 10M): T(4.38*n)
    • time(n= 100M): T(4.95*n)
    • time(n=1000M): T(5.53*n)
    • so approximately the run time is: T((0.3516 + 0.5756·log10(n))·n)
    • so the complexity is O(n·log(n))
  4. difference between speed (runtime) and complexity O()

    • the actual runtime is t = T(f(n))*c
    • for big enough n it converges to t = O(f(n))*c
    • where O() is the time complexity of the algorithm
    • and T() is the actual runtime equation for any n (not O() !!!)
    • c is some constant time needed to process a single pass through all the loops together, etc.
    • a better O() does not mean a faster solution
    • it wins only for n beyond the threshold where O1(f1(n))*c1 < O2(f2(n))*c2
    • so if you optimize the constant c well, you can beat algorithms with better complexity up to that threshold
    • for example your code is around T(n·n/2) -> O(n^2)
    • but for low n it can be faster than my SOE, which is O(n·log(n))
    • because mine needs to prepare tables, which takes more time than your division up to a point
    • but beyond that point yours gets much, much slower ...

So, to the question whether there is a difference between the most efficient mathematical and programming solutions:

  • the answer is YES, there can be, on a given N-range
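The crossover described above can be demonstrated empirically. The sketch below is mine, not Spektre's linked code: it pits a naive trial-division routine (like the asker's) against a basic sieve and times both, so you can see how the sieve's table-setup constant trades off against the better complexity.

```python
import timeit

def trial_division(n):
    """Roughly O(n^2): test every candidate against every smaller prime."""
    primes = []
    for num in range(2, n):
        if all(num % p for p in primes):
            primes.append(num)
    return primes

def sieve(n):
    """Sieve of Eratosthenes: mark composites in a flat boolean table."""
    if n <= 2:
        return []
    flags = [True] * n
    flags[:2] = [False, False]
    for p in range(2, int(n ** 0.5) + 1):
        if flags[p]:
            # slice assignment marks p*p, p*p+p, ... as composite
            flags[p * p::p] = [False] * len(flags[p * p::p])
    return [i for i, f in enumerate(flags) if f]

# The sieve pays a fixed table-allocation cost; trial division does not.
# For small n that constant can dominate, for large n the sieve wins.
for n in (100, 10_000):
    t_trial = timeit.timeit(lambda: trial_division(n), number=5)
    t_sieve = timeit.timeit(lambda: sieve(n), number=5)
    print(n, round(t_trial, 4), round(t_sieve, 4))
```

Exact timings and the crossover point depend on the machine and interpreter, which is precisely the point about the constant c.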

Upvotes: 0

salezica

Reputation: 77029

First of all, you should know that your algorithm isn't the sieve of Eratosthenes. You're using trial division.

There are a number of improvements that can be made to your implementation.

  1. Use xrange(), which is O(1) memory-wise, not range(), which is O(n).

  2. Skip even numbers in your search: xrange(5, n, 2) steps 2 at a time, visiting only odd candidates (handle 2 separately).

  3. Don't test if a prime p divides n when p > sqrt(n). It is not possible.

As you predicted, these changes don't affect the order of complexity, but you'll see a solid performance improvement.
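A sketch of trial division with all three improvements applied. Written in Python 3, where range() is already lazy like Python 2's xrange(); the function name is mine:

```python
def trial_division_primes(n):
    """Primes below n by trial division: odd candidates only,
    and only divisors p with p*p <= candidate."""
    if n <= 2:
        return []
    primes = [2]
    for num in range(3, n, 2):      # skip even numbers entirely
        is_prime = True
        for p in primes:
            if p * p > num:         # no prime factor can exceed sqrt(num)
                break
            if num % p == 0:
                is_prime = False
                break
        if is_prime:
            primes.append(num)
    return primes

print(trial_division_primes(100))
```

The early `break` on `p * p > num` is what the third improvement buys: each candidate is tested against O(sqrt(num)/log(num)) primes instead of all of them.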

As for a faster algorithm, first implement a real sieve of Eratosthenes, then try the much faster sieve of Atkin.
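For reference, a minimal sketch of the real sieve of Eratosthenes (again Python 3, my own naming): instead of dividing, it allocates a boolean table and crosses off multiples of each prime, starting at p*p because smaller multiples were already marked by smaller primes.

```python
def sieve_of_eratosthenes(n):
    """Return all primes below n using a boolean sieve."""
    if n <= 2:
        return []
    is_prime = [True] * n
    is_prime[0] = is_prime[1] = False
    p = 2
    while p * p < n:
        if is_prime[p]:
            # multiples below p*p were already marked by smaller primes
            for multiple in range(p * p, n, p):
                is_prime[multiple] = False
        p += 1
    return [i for i, flag in enumerate(is_prime) if flag]
```

No division or modulo at all, which is why it comfortably handles the 10-million range the question asks about.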

Upvotes: 4
