user7318820


Run-time of a function to find the (n / 2^k)th element of a list of size n?

I can imagine only two cases: if 2^k approaches n (say, 2^k == n), then the number of steps required is n / n, or 1, so it has a run-time of O(1).

However, if 2^k is small, (say, if 2^k == 1), then n steps are required, so it has a run-time of O(n).

In essence, if 2^k can be written in terms of n, then the number of steps required is an integer, so it has a run-time of O(1). If 2^k cannot be written in terms of n, it has a run-time of O(n).

Both of these cases are worst-case; for any unknown and possibly large n, these run-times still hold. Is there any situation in which 2^k can't be characterized as either much smaller than n or close to n?

Imagine you were to write a (possibly very long) table containing the run-times for the function for values of k such that (n / 2^k) >= 1. As k starts at 0 and increases, the run-times are O(1), but as k starts at a large value and decreases, the run-times are O(n). Is there a theoretical point at which the run-time switches from O(1) to O(n)?

Or am I simply not grasping the "main idea" of Big-O analysis?

EDIT: A linear search is assumed
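
To make the setup concrete, here is a minimal sketch of the linear scan I have in mind (the function name is just for illustration); the loop's step count is what I'm asking about:

    def element_at_fraction(lst, k):
        # Scan linearly to the (n / 2**k)-th element (1-based, rounded down,
        # clamped to at least the first element) and count the steps taken.
        n = len(lst)
        target = max(n // (2 ** k), 1)
        steps = 0
        for value in lst:
            steps += 1
            if steps == target:
                return value, steps
        return None, steps

    # n = 16: k = 4 reaches the element in 1 step, k = 0 takes all 16 steps.
    print(element_at_fraction(list(range(16)), 4))   # (0, 1)
    print(element_at_fraction(list(range(16)), 0))   # (15, 16)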

Upvotes: 1

Views: 46

Answers (2)

Tobi Alafin

Reputation: 249

However, if 2^k is small, (say, if 2^k == 1), then n steps are required, so it has a run-time of O(n). (1)

In essence, if 2^k can be written in terms of n, then the number of steps required is an integer, so it has a run-time of O(1). If 2^k cannot be written in terms of n, it has a run-time of O(n).(2)

(1)'s conclusion is correct, but (2) is wrong.

First of all, you have two variables, n and k, where 1 <= 2^k <= n.

If n = 128 and k = 4, then 128 / 2^4 = 8.

The number of steps is 8, not 1.

In general, if n is a power of 2, say n = 2^x, then the number of steps is f(n) = 2^x / 2^k = 2^(x - k).

Since n = 2^x, basic logarithms give x = lg(n), where lg(y) is the log to base 2 of y.

f(n) = 2^(lg(n) - k)    (1)

The run time is 2^(lg(n) - k). If n is not divisible by 2^k, then you are attempting to access an element at a fractional index, which does not exist.

I'll assume you want the index rounded to the nearest integer; indexes are only integers, after all.

Now (1) gives you the general run-time formula. The worst case is when k = 0, and since Big-O uses the worst case, we go with that.

f(n) = O(2^(lg(n) - 0)) = O(2^lg(n))

This is the antilog of lg(n), which, again from basic logarithms, gives us n.

Therefore f(n) = O(n)

Q.E.D
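
As a quick sanity check of the formula above, here is a small sketch (the helper name is mine) that counts the steps of a linear scan directly and compares them to n / 2^k:

    import math

    def steps_to_reach(n, k):
        # Count linear-scan steps to the (n / 2**k)-th element (1-based).
        target = max(n // (2 ** k), 1)
        steps = 0
        for position in range(1, n + 1):
            steps += 1
            if position == target:
                break
        return steps

    for n in (128, 1024):
        for k in (0, 4, int(math.log2(n))):
            # The counted steps should equal n / 2**k, i.e. 2**(lg(n) - k).
            print(n, k, steps_to_reach(n, k), n // 2 ** k)

The k = 0 rows grow linearly with n, which is the worst case used above.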

Upvotes: 1

Teepeemm

Reputation: 4508

You’re missing a main idea of big-O analysis. There’s always an assumption (usually unstated) that you’re looking at a limit as something goes to infinity. Usually, there’s only one variable, so it’s clear from context that that’s the variable going to infinity. (For example, n^2+23n∈O(n^2) (as n goes to infinity).)

In your case, you have two variables, n and k. So in order to talk about big-O, you need to quantify what is going to infinity (it looks to be n), and what the other variable is doing in the meantime. This does lead to your two possibilities: if k is constant as n goes to infinity, then the answer is O(n); but if k is such that n/2^k is constant (i.e., k=lg(n)-C), then the answer is O(1). But there are other possibilities: if n/2^k=√n (i.e., k=lg(n)/2), then the answer is O(√n).

So depending on how k grows with n, the answer is anything between O(1) and O(n).
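
To make that concrete, here is a small sketch (the helper name is mine) that tabulates the step count n / 2^k for the three regimes mentioned above:

    import math

    def steps(n, k):
        # Linear-scan step count to the (n / 2**k)-th element: n / 2**k, at least 1.
        return max(n // (2 ** k), 1)

    for n in (2 ** 10, 2 ** 16, 2 ** 20):
        lg_n = int(math.log2(n))
        print(
            n,
            steps(n, 3),          # k constant     -> grows like n
            steps(n, lg_n - 3),   # k = lg(n) - C  -> constant (here 8)
            steps(n, lg_n // 2),  # k = lg(n) / 2  -> grows like sqrt(n)
        )

Scanning down the output, the first column grows in proportion to n, the second stays fixed, and the third grows like √n, matching the three asymptotic regimes.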

Upvotes: 1
