Reputation: 21
I am currently learning about Big O Notation running times and amortized times.
I have the following question:
Two algorithms based on the principle of divide and conquer are available to solve a problem of size n. Algorithm 1 divides the problem into 18 subproblems and requires O(n^2) operations to combine the sub-solutions. Algorithm 2 divides the problem into 64 subproblems and requires O(n) operations to combine the sub-solutions.
Which algorithm is better and faster (for large n)?
I'm guessing that the second Algorithm is better because it requires less time (O(n) is faster than O(n^2)). Am I correct in my guess?
Does the number of subproblems play a role in the speed of the algorithm, or does it always take constant time?
Upvotes: 0
Views: 165
Reputation: 758
The Master theorem is used for the asymptotic analysis of divide and conquer algorithms and gives you a way to get a direct answer rather than guessing.
T(n) = aT(n/b) + f(n)
where T(n) is the running time of the main problem, n is the input size, a is the number of subproblems you divide into, b is the factor by which the input size shrinks for each subproblem, and f(n) is the cost of splitting the problem and combining the sub-solutions. From here we find c such that:
f(n) is O(n^c)
For example, in your question algorithm 1 has c = 2 and algorithm 2 has c = 1. The value of a is 18 and 64 for algorithms 1 and 2 respectively. The next part is where your problem is missing the appropriate information, since b is not provided. In other words, to get a clear answer, you need to know the factor by which each subproblem shrinks the original input.
if c < log_b(a) then T(n) is O(n^(log_b(a)))
if c = log_b(a) then T(n) is O(n^c log(n))
if c > log_b(a) then T(n) is O(f(n))
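Since b is missing, here is a rough Python sketch (my own illustration; the b values are only assumed, not given in your question) of how you would plug the numbers into the theorem once you know b:
import math

def master_theorem(a, b, c):
    # Classify T(n) = a*T(n/b) + O(n^c) via the Master theorem.
    crit = math.log(a, b)              # critical exponent log_b(a)
    if c < crit:
        return f"O(n^{crit:.3f})"      # leaf work dominates
    elif c == crit:                    # exact float comparison is fine for these illustrative values
        return f"O(n^{c} log n)"       # work is balanced across levels
    else:
        return f"O(n^{c})"             # the combine step dominates

# Algorithm 1: a = 18, combine cost O(n^2); b = 18 is only an assumption.
print(master_theorem(18, 18, 2))       # O(n^2)
# Algorithm 2: a = 64, combine cost O(n); b = 64 is only an assumption.
print(master_theorem(64, 64, 1))       # O(n^1 log n)
With a different b (say b = 4 for algorithm 1) the verdict changes, which is exactly why the question cannot be answered without it.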
Upvotes: 0
Reputation: 64904
In this case it's probably not intended to be a trap, but it's good to be careful, because counter-intuitive things can happen. The trap, if there is one, is mostly this: how much smaller do the sub-problems get, compared to how many of them are generated?
For example, it is true for Algorithm 1 here that if the sub-problems are 1/5th of the size of the current problem or smaller (and perhaps they meant they would be 1/18th the size?), then the overall time complexity is in O(n^2). But if the size of the problem only goes down by a factor of 4, we're already up to O(n^2.085), and if the input is only cut in half (but still into 18 sub-problems) then it goes all the way up to O(n^4.17).
Similarly for Algorithm 2: sure, if it cuts a problem into 64 sub-problems that are each 1/64th of the size, the overall time complexity would be in O(n log n). But if the sub-problems are even a little bit bigger, say 1/63rd of the size, we immediately go up a whole step in the hierarchy to O(n^1.004) - still only a tiny constant in the exponent, but no longer loglinear. Make the sub-problems 1/8th of the size and the complexity becomes quadratic, and if we go to a mere halving of the problem size at each step it's all the way up to O(n^6)! On the other hand, if the problems shrink just a little faster, say to 1/65th of the size, the complexity again stops being loglinear, but this time in the other direction, becoming O(n).
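For reference, the exponents above are just log_b(a); a quick Python check (my own sketch, not part of the original answer) reproduces them:
import math

# Critical exponent log_b(a) for T(n) = a*T(n/b) + O(n^c):
# T(n) is O(n^c) when c > log_b(a), O(n^c log n) when equal,
# and O(n^(log_b(a))) when c < log_b(a).

# Algorithm 1: a = 18, combine cost O(n^2)
for b in (5, 4, 2):
    print(f"a=18, b={b}: log_b(a) = {math.log(18, b):.3f}")
# b=5 -> 1.796 (below 2, so O(n^2));  b=4 -> 2.085;  b=2 -> 4.170

# Algorithm 2: a = 64, combine cost O(n)
for b in (64, 63, 8, 2, 65):
    print(f"a=64, b={b}: log_b(a) = {math.log(64, b):.3f}")
# b=64 -> 1.000 (O(n log n));  b=63 -> 1.004;  b=8 -> 2.000
# b=2 -> 6.000;  b=65 -> 0.996 (below 1, so O(n))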
So it could go either way, depending on how quickly the sub-problems shrink, which is not explicitly mentioned in your problem statement. Hopefully it is clear that merely comparing the "additional processing per step" is not sufficient, not in general anyway. A lot of processing per step is a disadvantage that cannot be overcome, but having only a little processing per step is an advantage that can be easily lost if the "shrinkage factor" is small compared to the "fan-out factor".
Upvotes: 1