user12394388


Big Oh! algorithms running in O(4^N)

For algorithms running in

O(16^N)

If we triple the size, the time is multiplied by what number?

Upvotes: 0

Views: 103

Answers (2)

templatetypedef

Reputation: 373082

This is an interesting question because while equivalent questions for runtimes like Θ(n) or Θ(n^3) have clean answers, the answer here is a bit more nuanced.

Let's start with a simpler question. We have an algorithm whose runtime is Θ(n^2), and on a "sufficiently large" input the runtime is T seconds. What should we expect the runtime to be once we triple the size of the input? To answer this, let's imagine, just for simplicity's sake, that the actual runtime of this function is closely approximated by c·n^2, and let's have k be the "sufficiently large" input size we plugged into it. Then, plugging in 3k, we see that the runtime is

c·(3k)^2 = 9·c·k^2 = 9·(c·k^2) = 9T.

That last step follows because the cost of running the algorithm on an input of size k is T, meaning that c·k^2 = T.

Something important to notice here: tripling the size of the input does not change the fact that the runtime here is Θ(n^2). The runtime is still quadratic; we're just changing how big the input is.
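The quadratic case can be checked numerically. Here's a minimal sketch, assuming a hypothetical cost model of exactly c·n^2 (the constant c = 5 is arbitrary and just for illustration):

```python
# Hypothetical cost model: assume the runtime is exactly c * n^2
# for some constant c (c = 5 chosen arbitrarily for illustration).
def quadratic_cost(n, c=5):
    return c * n ** 2

k = 1000                    # a "sufficiently large" input size
T = quadratic_cost(k)       # baseline runtime T on input size k
ratio = quadratic_cost(3 * k) / T
print(ratio)                # tripling n multiplies the cost by 9
```

Note that the factor of 9 comes out the same regardless of which k or c you pick, which is exactly the point of the derivation above.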

More generally, for any algorithm whose runtime is Θ(n^m) for some fixed constant m, the runtime will grow by roughly a factor of 3^m if you triple the size of the input. That's because

c·(3k)^m = 3^m·c·k^m = 3^m·T.
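The same check works for any polynomial degree. A quick sketch, again under a hypothetical cost model c·n^m (c = 7 and k = 50 are arbitrary):

```python
# Sketch: under a hypothetical cost model c * n^m, tripling n scales
# the runtime by 3^m, independent of the input size k and constant c.
def poly_cost(n, m, c=7):
    return c * n ** m

k = 50
for m in range(1, 5):
    ratio = poly_cost(3 * k, m) / poly_cost(k, m)
    print(m, ratio)         # 3.0, 9.0, 27.0, 81.0 — i.e. 3**m
```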

But something interesting happens if we try performing this same analysis on a function whose runtime is Θ(4^n). Let's imagine that we ran this algorithm on some input of size k and it took T time units to finish. Then running this algorithm on an input of size 3k will take time roughly

c·4^(3k) = c·4^k·4^(2k) = T·4^(2k) = 16^k·T.

Notice how we aren't left with a constant multiple of the original cost, but rather something that's 16^k times bigger. In particular, that means that the amount by which the algorithm slows down will depend on how big the input is. For example, the slowdown going from input size 10 to input size 30 is a factor of 16^10, while the slowdown going from input size 30 to input size 90 is a staggering 16^30. For what it's worth, 16^30 = 2^120, which is larger than a trillion trillion trillion.
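You can see the input-dependent slowdown directly. A minimal sketch, again assuming a hypothetical cost model of exactly c·4^n (c = 2 is arbitrary; Python's arbitrary-precision integers keep the ratios exact):

```python
# Sketch: under a hypothetical cost model c * 4^n, the slowdown from
# tripling the input size k is 4^(2k) = 16^k, which grows with k itself.
def exp_cost(n, c=2):
    return c * 4 ** n

for k in (5, 10, 20):
    slowdown = exp_cost(3 * k) // exp_cost(k)   # exact integer ratio
    print(k, slowdown == 16 ** k)               # True for every k
```

Unlike the polynomial case, there is no single answer to "how much slower does it get?" — the factor itself depends on k.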

And, intuitively, that makes sense. Exponential functions grow at a rate proportional to how big they already are. That means that doubling or tripling the size of the input will lead to a runtime change that depends on the size of that input.

And, as above, notice that the runtime is not now Θ(4^(3n)). The runtime is still Θ(4^n); we're just changing which inputs we're plugging in.

So, to summarize:

  • The runtime of the function slows down by a factor of 4^(2n) = 16^n if you triple the size of the input n. This means that the slowdown depends on how big the input is.
  • The runtime of the function stays at Θ(4^n) when we do this. All that's changing is where we're evaluating the 4^n.

Hope this helps!

Upvotes: 1

BishalG

Reputation: 1434

The time complexity of an algorithm describes how its runtime grows with respect to the input size. So, if the input size increases by 3 times, we simply have a new value for the input size; the growth rate itself does not change.

Hence, the time complexity of the algorithm still remains the same, i.e. O(4^N).

Upvotes: 0
