aky

Reputation: 83

Prove that different definitions of big-Oh with n>=1 or n>N are equivalent

I have come across two slightly different definitions of big-O and need to prove that they are equivalent to each other:

Definition 1: f(n) = O(g(n)) if there exist constants c and N such that f(n) ≤ c g(n) for all n > N.

Definition 2: f(n) = O(g(n)) if there exists a constant c such that f(n) ≤ c g(n) for all n ≥ 1.

Intuitively I know that if we choose c large enough we can get rid of N, as in Definition 2. But how do I prove that Definition 1 implies Definition 2, and vice versa?

Upvotes: 0

Views: 170

Answers (2)

Leandro Caniglia

Reputation: 14858

Let me explain my comment on @btilly's answer in more detail.

When g(n)>0 for all values of n, both definitions are in fact equivalent. Here is why:

Firstly, it is always true that when Definition 2 holds, Definition 1 also holds. In fact, we can choose N=0 in this case.

Assume now that Definition 1 is satisfied for some constant c and some number N. If N=0, we have Definition 2. If N>0, consider the following quantity:

c1 := max{f(1)/g(1), ..., f(N)/g(N)}

The quotients make sense because we are in the case where g(n) is always positive. Moreover, since

f(n)/g(n) <= c1        (1<=n<=N)

we get

f(n) <= c1*g(n)        (1<=n<=N)

and since f(n) <= c*g(n) for n > N, it follows that

f(n) <= max(c1,c)*g(n)     for all n

as Definition 2 requires.
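
As a sanity check, here is a minimal Python sketch of this construction. The functions f and g and the Definition-1 pair (c, N) are hypothetical examples chosen for illustration (with g positive for all n ≥ 1); the script computes c1 as the maximum quotient over 1..N and then verifies that max(c1, c) serves as the Definition-2 constant on a finite test range:

    # Hypothetical example: f(n) = 3n + 10 and g(n) = n (g is positive for n >= 1).
    # Definition 1 holds here with, say, c = 4 and N = 10, since 3n + 10 <= 4n once n >= 10.
    def f(n):
        return 3 * n + 10

    def g(n):
        return n

    c, N = 4, 10

    # The constant from the argument above: c1 = max of f(n)/g(n) over the finitely many n in 1..N.
    c1 = max(f(n) / g(n) for n in range(1, N + 1))
    c2 = max(c1, c)  # candidate constant for Definition 2

    # Check f(n) <= c2 * g(n) on a finite test range (n > N is already covered by c).
    assert all(f(n) <= c2 * g(n) for n in range(1, 100_000))
    print(f"c1 = {c1}, Definition-2 constant max(c1, c) = {c2}")

The finite assert is only a spot check, of course; the proof above is what covers all n.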

Upvotes: 1

btilly

Reputation: 46408

They aren't actually equivalent, and (1) is the correct definition.

An example of the difference is that under (1), n = O(n log(n)), but under definition (2) it can't be, because at n = 1, for any c, c·g(1) = c·1·log(1) = 0 < 1 = f(1).
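
A quick numeric check of that edge case (using the natural log here; any base gives the same result):

    import math

    f = lambda n: n                    # f(n) = n
    g = lambda n: n * math.log(n)      # g(n) = n * log(n)

    # At n = 1: g(1) = 1 * log(1) = 0, so c * g(1) = 0 < 1 = f(1), no matter how large c is.
    for c in (1, 10, 1_000, 10**9):
        print(c, c * g(1) >= f(1))     # prints False every time: Definition 2 cannot hold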

The reason (1) is the correct definition is that the purpose of big-O is to capture behavior "near infinity", and so a finite number of special cases for small n should be ignored.

The reason you'll see (2) show up is that it is sufficient to establish big-O; it just isn't necessary.

Upvotes: 3
