chrisd1120

Reputation: 25

What is the runtime of this recursive code for computing exponents?

Would the running time complexity for the following function be O(1)?

int pow(int a, int n) {
    if (n == 0) {
        return 1;
    }
    if (n % 2 == 1) {
        return pow(a, n / 2) * pow(a, n / 2) * a;
    } else {
        return pow(a, n / 2) * pow(a, n / 2);
    }
}

I was under that impression because there are only if statements in the code, no loops. I've never worked with Big-O and recursion before and I can't find any good resources online.

Upvotes: 2

Views: 361

Answers (3)

Dannyu NDos

Reputation: 2508

No. Every call with n > 0 makes two recursive calls, so the recursion branches: its time complexity is O(n), and its space complexity (the recursion depth) is O(log n).

You'll get O(log n) time complexity with:

int pow (int a, int n) {
    if(n == 0) { return 1; }
    int halfpow = pow(a,n/2);
    return halfpow * halfpow * (n % 2 == 1 ? a : 1);
}

Upvotes: 1

user2836202

Reputation: 639

Let's break down what is happening here. It is in fact O(n). For each call to pow() there are two options that can happen:

  1. Return 1 -- which is a single operation.
  2. Perform two calls to pow() with n/2.

So, while n is halved on each call to pow(), the number of calls doubles at each level of the recursion. The first level of recursion makes two calls to pow, the second makes 2^2, the third 2^3, and so on. For n = 16 we will have pow(..., 16), pow(..., 8), pow(..., 4), pow(..., 2), pow(..., 1), pow(..., 0), with 2^0 calls to pow for n = 16, 2^1 for n = 8, 2^2 for n = 4, 2^3 for n = 2, 2^4 for n = 1, and 2^5 for n = 0.

So, the recursion is log(n) levels deep, but each level makes double the number of calls of the previous one. Summing 1 + 2 + 4 + ... across the levels gives O(n) total calls, so the running time is O(n).

Upvotes: 2

templatetypedef

Reputation: 373002

The runtime of your function is O(n), but it can easily be modified to run in time O(log n).

There are many ways we can see this. First, we could count up the total number of recursive calls being made, since each recursive call does O(1) work. Imagine, for example, we call pow(a, 8) for some number a. Then

  • pow(a, 8) calls pow(a, 4) twice.
  • pow(a, 4) calls pow(a, 2) twice.
  • pow(a, 2) calls pow(a, 1) twice.
  • pow(a, 1) calls pow(a, 0) twice.

This means that there is

  • 1 call to pow(a, 8),
  • 2 calls to pow(a, 4),
  • 4 calls to pow(a, 2),
  • 8 calls to pow(a, 1), and
  • 16 calls to pow(a, 0).

Overall, this is 1 + 2 + 4 + 8 + 16 = 31 total calls made.

Now, suppose we call pow(a, 16). That will fire off two calls to pow(a, 8) (62 total recursive calls made), plus one for the initial call to pow(a, 16) for a total of 63 recursive calls. If we called pow(a, 32), we'd make two calls to pow(a, 16) (126 total recursive calls), plus one for the initial call to pow(a, 32) for a total of 127 recursive calls. More generally, it seems like if we call pow(a, n), we'd get 4n - 1 calls made, which would be O(n).

We can actually formally prove this. Let C(n) be the number of calls made on an input of size n. Notice that

C(0) = 1
C(n) = 2C(n / 2) + 1

This recurrence solves, via the Master Theorem, to O(n).

Notice that each individual recursive call is doing very little work. What kills us is the fact that there are so many total recursive calls being made that the work adds up across those calls. But while there are a lot of total recursive calls being made, there are very few unique recursive calls being made. So consider this variation on the code:

int pow(int a, int n) {
    if (n == 0) return 1;

    int halfPow = pow(a, n / 2);
    if (n % 2 == 0) return halfPow * halfPow;
    else return halfPow * halfPow * a;
}

This code caches the value of the recursive call, so each invocation fires off only a single recursive call. As a result, the work done per call is still O(1), but there's no more branching in the recursion. Then, since each recursive call has a size that's half that of the original, and because there's only one call per level, the runtime works out to O(log n), which you can confirm using the Master Theorem.

In general, be wary of arguments of the form "we keep cutting things in half, so the overall work ends up being O(log n)." That may be true, but the amount of work you do at each step is also very, very important for determining the runtime, as you can see here.

Upvotes: 5
