pius

Reputation: 119

How do you calculate combined orders of growth?

Suppose I have a recursive procedure with a formal parameter p. This procedure

  1. wraps the recursive call in a Θ(1) (deferred) operation
  2. and executes a Θ(g(k)) operation before that call.

k depends upon the value of p [1]: either k = p [2], or k = f(p) for some function f [3].

The procedure calls itself with the argument p/b, where b is a constant (assume the recursion terminates once p falls somewhere between 0 and 1).

Question 1. If n is the value of the argument passed to p in the initial call to the procedure, what are the orders of growth, in terms of n, of the space and of the number of steps for the process this procedure generates?


Footnotes

[1] i.e., upon the value of the argument passed into p.
[2] i.e., the size of the input to the nested operation is same as that for our procedure.
[3] i.e., the size of the input to the nested operation is some function of the input size of our procedure.


Sample procedure

(define (* a b)
  (cond ((= b 0) 0)                           ; base case: b = 0
        ((even? b) (double (* a (halve b))))  ; double x = x + x, halve x = x / 2
        (else (+ a (* a (- b 1))))))          ; odd b: peel off one addition

This procedure performs integer multiplication as repeated additions, using the rules of doubling and halving.

Pseudo-code:

define *(a, b) as
{
    if (b is 0) return 0
    if (b is even) return double of *(a, halve (b))
    else return a + *(a, b - 1)
}
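For readers who want to run the procedure, here is a direct Python translation of the Scheme code above (a sketch; `double` and `halve` are written out with the standard SICP meanings):

```python
def double(x):
    # Double x using only addition.
    return x + x

def halve(x):
    # Halve an even x.
    return x // 2

def mult(a, b):
    # Multiply a * b using only addition, doubling and halving,
    # mirroring the recursive Scheme procedure above.
    if b == 0:
        return 0
    if b % 2 == 0:
        return double(mult(a, halve(b)))
    return a + mult(a, b - 1)
```

For example, mult(3, 7) evaluates to 21, following the same even/odd case analysis as the pseudo-code.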


Question 2. What will be the orders of growth, in terms of n, when *(a, n) is evaluated?


Before You Answer

Please note that the primary questions are the two parts of Question 1.

Question 2 can be answered like the first part. For the second part, you can take f(p) to be any function you like: log p, p/2, p^2, etc.

Upvotes: 1

Views: 90

Answers (2)

user8027470

Reputation:

I see someone has already answered Question 2, so I'll answer Question 1 only.

The first thing to notice is that the two parts of the question are equivalent. In the first part, k = p, so we execute a Θ(g(p)) operation for some function g. In the second, k = f(p) and we execute a Θ(g(f(p))) = Θ((g∘f)(p)) operation. Replace g from the first part by g∘f and the second part is solved.

Thus, let's consider the first case only, i.e. k = p. Denote the time complexity of the recursive procedure by T(n); we have:
T(n) = T(n/b) + g(n) [The g(n) term should be multiplied by a constant c, but we can measure complexity in "amounts of c" and the Θ-bound will obviously remain the same]

The solution of this recurrence is T(n) = g(n) + g(n/b) + ... + g(n/b^i) + ... + g(1).
We cannot simplify it further unless given additional information about g. For example, if g is a polynomial, g(n) = n^k, we get that
T(n) = n^k * (1 + b^-k + b^-2k + ... + b^(-k*log_b(n))) <= n^k * (1 + b^-k + b^-2k + ...) <= n^k * c for a constant c, since the series is geometric with ratio b^-k < 1; thus T(n) = Θ(n^k).
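As a quick sanity check (not part of the original answer), the partial sums of this series can be computed numerically; a Python sketch assuming b = 2 and g(n) = n^2, where the geometric-series constant is 1/(1 - 2^-2) = 4/3:

```python
def T(n, b=2, g=lambda x: x ** 2):
    # Sum g(n) + g(n/b) + ... + g(1), the unrolled solution
    # of the recurrence T(n) = T(n/b) + g(n).
    total = 0.0
    while n >= 1:
        total += g(n)
        n /= b
    return total

# The ratio T(n) / g(n) stays below the geometric-series
# constant 4/3 for b = 2, k = 2, confirming T(n) = Θ(n^2).
ratio = T(2 ** 20) / (2 ** 20) ** 2
```

The same check with any other polynomial exponent k gives a ratio bounded by 1/(1 - b^-k).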

But if g(n) = log_b(n) [from now on I omit the base of the log], we get that T(n) = log(n) + log(n/b) + ... + log(n/b^log_b(n)) = log(n^log(n) / b^(1 + 2 + ... + log(n))) = log(n)^2 - log(n)^2 / 2 - log(n) / 2 = Θ(log(n)^2) = Θ(g(n)^2).
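The logarithmic case can also be checked numerically (a sketch, not from the original answer), taking b = 2 so that for n = 2^m the sum collapses to m + (m-1) + ... + 0 = m(m+1)/2 = Θ(log(n)^2):

```python
import math

def T_log(n, b=2):
    # Sum log(n) + log(n/b) + ... + log(1), the unrolled solution
    # of T(n) = T(n/b) + log(n) with base-2 logarithms.
    total = 0.0
    while n >= 1:
        total += math.log2(n)
        n /= b
    return total
```

For n = 2^20 the sum is 20 * 21 / 2 = 210, quadratic in log(n) = 20 even though each individual term is only logarithmic.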

You can easily prove, using an argument similar to the polynomial case, that when g = Ω(n), i.e. at least linear, the complexity is Θ(g(n)). But when g is sublinear, the complexity may well be bigger than g(n), since g(n/b) may be much bigger than g(n)/b.

Upvotes: 1

PF. Castro

Reputation: 144

You need to apply worst-case analysis.

First, you can approximate the solution by using powers of two:

If b = 2^m then clearly the algorithm takes Θ(m * g(k)) steps (where m = log_2 b).

If b is odd, then after subtracting 1 you get an even number and you divide it by 2; you can repeat this only log b times, so the number of steps is also Θ(log b * g(k)). The case of b being odd is clearly the worst case, and this gives you the answer.
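The step count can be verified directly by simulating the even/odd case analysis (a sketch; `steps` is a hypothetical helper, not from the answer; the value of a does not affect the count):

```python
def steps(b):
    # Count the recursive calls the multiplication procedure makes
    # for a given b: halve when even, subtract 1 when odd.
    count = 0
    while b > 0:
        count += 1
        if b % 2 == 0:
            b //= 2   # even case: halve b
        else:
            b -= 1    # odd case: peel off one addition
    return count
```

For b = 2^m the count is m + 1 (all halvings), while for b = 2^m - 1 (every bit set, the worst case) it is 2m - 1, since each halving is preceded by a subtraction; both are Θ(log b).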

(I think you need an additional base case for: b=1)

Upvotes: 0
