Reputation: 295
Suppose you have a program one_factor(N) that, given an n-digit binary number N, returns one of the prime factors of the number in Theta(n^2) time (note that I used a lowercase n in my theta notation).
Using this algorithm, I want to find an efficient algorithm to factor an n-digit binary number into primes and print the prime factors. Moreover, I want to know how fast the algorithm runs as a function of n, and I also want to compute the approximate number of times my algorithm uses the one_factor(N) oracle in the worst case as a function of n.
Assume it takes Theta(n) time to add/divide/multiply/subtract two n-digit binary numbers.
Here is an algorithm that works: call one_factor(N). If the function returns N, then N is prime and we're done. Otherwise, divide out this prime factor, store it somewhere, and repeat the procedure on the quotient until we're done.
Now I'm having trouble analyzing the runtime of this procedure as a function of n. For simplicity, if N is a power of two, then n = log_2(N), but I'm not really sure how to proceed from here. I don't understand how to find the worst-case number of times we call one_factor, and I'm having trouble analyzing the worst-case running time as well.
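For concreteness, here is a minimal Python sketch of that loop. one_factor is the oracle from the problem statement; it is stubbed out with naive trial division here just so the code runs:

```python
def one_factor(N):
    # Stand-in for the hypothetical Theta(n^2) oracle: return some prime
    # factor of N (N itself if N is prime).
    d = 2
    while d * d <= N:
        if N % d == 0:
            return d
        d += 1
    return N

def factor(N):
    factors = []
    while N > 1:
        p = one_factor(N)
        factors.append(p)
        if p == N:      # the oracle returned N itself, so N is prime
            break
        N //= p         # divide out the prime factor and repeat
    return factors

print(factor(360))  # [2, 2, 2, 3, 3, 5]
```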
Upvotes: 2
Views: 226
Reputation: 46445
First, the oracle is wonderful but slow, so you want to do trial division by small primes first.
From the arithmetic functions table on Wikipedia, if you use Newton–Raphson division on top of the Harvey–van der Hoeven multiplication algorithm, checking whether N is evenly divisible by a prime p can be done in O(n log(n)) time. Computing the O(n / log(n)) primes up to n is doable with the sieve of Eratosthenes in time well less than O(n^2), and doing all of those trial divisions takes O(n^2) time. (Using more practical multiplication algorithms you get slightly worse bounds, but that won't change the overall result.)
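A rough sketch of this pre-step (plain Python, so it does not have the fast-division asymptotics quoted above; primes_up_to and strip_small_factors are just illustrative names):

```python
def primes_up_to(limit):
    """Sieve of Eratosthenes: all primes <= limit."""
    if limit < 2:
        return []
    sieve = [True] * (limit + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, int(limit ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = [False] * len(sieve[i * i :: i])
    return [i for i, is_prime in enumerate(sieve) if is_prime]

def strip_small_factors(N, n):
    """Divide out every prime factor <= n; return the rest of N and the factors found."""
    small = []
    for p in primes_up_to(n):
        while N % p == 0:
            small.append(p)
            N //= p
    return N, small
```

After this step every remaining prime factor of N is larger than n, which is what the argument below relies on.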
Now you refer to your oracle. Each oracle call takes O(n^2) time, and the division that follows it is small compared to that. Since every prime up to n has already been divided out, each factor the oracle returns is larger than n, so dividing it out reduces the size of the number by at least a factor of n. How many divisions are needed? O(log_n(2^n)) = O(log(2^n)/log(n)) = O(n/log(n)). And each division is O(n log(n)) per the same algorithm as above.
The resulting algorithm is O((n^2 + n log(n)) * n/log(n)) = O(n^3/log(n)). Note that the /log(n) is the savings from doing trial division before having to consult the oracle.
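Putting the two phases together, the whole method looks roughly like this sketch (one_factor is still the hypothetical oracle; a real implementation would use the sieve from the previous snippet rather than dividing by every candidate up to n):

```python
def factor_with_oracle(N, one_factor):
    """Factor N: trial-divide by everything up to n = bit length, then use the oracle."""
    n = N.bit_length()
    factors = []
    # Phase 1: strip out all prime factors <= n. Composite candidates never
    # divide N here because their prime factors were already removed.
    for d in range(2, n + 1):
        while N % d == 0:
            factors.append(d)
            N //= d
    # Phase 2: every remaining prime factor exceeds n, so each oracle call
    # shrinks N by more than a factor of n, giving O(n / log n) calls in total.
    while N > 1:
        p = one_factor(N)
        factors.append(p)
        if p == N:        # the oracle returned N itself, so N is prime
            break
        N //= p
    return factors
```

Only the second loop consults the oracle; the first phase is what guarantees the O(n/log(n)) bound on how many times it is called.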
Note that you cannot improve this by doing more trial division: if you try to go through all of the primes up to n^2, you have already spent more time in trial division than your original algorithm takes, and the big-O of the oracle-division phase is still the same.
Upvotes: 3
Reputation: 28278
Note that after division, your number N becomes N/2 or smaller. So you will need a maximum of log2(N) oracle invocations. This is O(n). Each invocation takes O(n²) time or less ("less" may be necessary to mention depending on your definition of O).
If your N is a power of 2, it realizes the worst-case number of invocations. By luck, it also realizes (up to a constant factor) the worst case for the total execution time, because during the first half of those invocations the number is still big: it has at least n/2 digits.
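As a quick illustration (using a stand-in oracle that simply returns the smallest prime factor), you can count the invocations for a power of 2 directly:

```python
def count_oracle_calls(N):
    # Count how many times the (hypothetical) oracle would be consulted,
    # using smallest-prime-factor search as a stand-in oracle.
    calls = 0
    while N > 1:
        calls += 1
        p = next(d for d in range(2, N + 1) if N % d == 0)  # smallest factor
        if p == N:
            break
        N //= p
    return calls

print(count_oracle_calls(2 ** 40))  # 40 calls, one per factor of 2 = log2(N)
```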
Upvotes: 2