Spencer

Reputation: 79

Is any function Big-O itself?

While working through a basic course on asymptotic notation, I came to a set of problems in which I was supposed to find a function g(n) such that a given function f(n) = O(g(n)). After working through these problems a bit, I came to wonder: aren't all functions big-O of themselves? Any function f(n) will eventually be bounded by some c * f(n), given that f(n) is the original function. I have been trying to disprove this in Desmos, to no avail.

Am I fundamentally misunderstanding big-O notation? Is the purpose more to prove that some algorithms have definitively smaller run times than others, rather than to bound them with an arbitrary function? Any clarification is much appreciated. Thank you.

Upvotes: 3

Views: 2552

Answers (4)

Sarthak Kumar

Reputation: 13

The answers given above are great, but here's an addition that would make this question page complete. I'm assuming that you've read the link in the answer above, and know about Big-O, Theta and Omega notation.

I guess what you're asking is: is there an f(n) such that f(n) = O(f(n))?

Maybe. But there are many cases where f(n) = Θ(f(n)). This happens when f(n) = T(f(n)), i.e. when a function that takes an integer and returns an integer is defined the same way as its own time function.

This is especially true of the Fibonacci series function. I started typing out a proof, but will defer in favour of this answer: https://stackoverflow.com/a/360773/11252937

In case the link goes down, here's an excerpt:

You model the time function to calculate Fib(n) as the sum of the time to calculate Fib(n-1), plus the time to calculate Fib(n-2), plus the time to add them together (O(1)). This assumes that repeated evaluations of the same Fib(n) take the same time, i.e. that no memoization is used.

T(n<=1) = O(1)

T(n) = T(n-1) + T(n-2) + O(1)

You solve this recurrence relation (using generating functions, for instance) and you'll end up with the answer.

Alternatively, you can draw the recursion tree, which will have depth n and intuitively figure out that this function is asymptotically O(2^n). You can then prove your conjecture by induction.

Base: n = 1 is obvious

Assume T(n-1) = O(2^(n-1)), therefore

T(n) = T(n-1) + T(n-2) + O(1) which is equal to

T(n) = O(2^(n-1)) + O(2^(n-2)) + O(1) = O(2^n)

However, as noted in a comment, this is not the tight bound. An interesting fact about this function is that T(n) is asymptotically the same as the value of Fib(n), since both are defined by

f(n) = f(n-1) + f(n-2).

The leaves of the recursion tree will always return 1. The value of Fib(n) is the sum of all values returned by the leaves in the recursion tree, which is equal to the count of leaves. Since each leaf takes O(1) to compute, T(n) is equal to Fib(n) × O(1). Consequently, the tight bound for this function is the Fibonacci sequence itself, roughly Θ(1.6^n). You can find this tight bound by using generating functions, as mentioned above.
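A quick way to see this empirically (my own sketch, not part of the linked answer): count the calls made by a naive recursive Fibonacci, taking fib(0) = fib(1) = 1 to match the recurrence above. The call count C(n) satisfies C(n) = C(n-1) + C(n-2) + 1, which works out to exactly 2·Fib(n) − 1, so the work really does grow in lockstep with the value of Fib(n).

```python
def fib_counted(n, counter):
    # Naive recursive Fibonacci; counter[0] tracks the total number of
    # calls, which serves as a proxy for T(n).
    counter[0] += 1
    if n <= 1:
        return 1
    return fib_counted(n - 1, counter) + fib_counted(n - 2, counter)

for n in (5, 10, 15, 20):
    counter = [0]
    value = fib_counted(n, counter)
    # The call count equals 2 * Fib(n) - 1, so T(n) = Theta(Fib(n)).
    print(n, value, counter[0])
```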

Upvotes: 1

Kevin He

Reputation: 1250

You can find some resources on all the notations around the area here.

Theoretically yes, any function is big-O of itself; it's mathematically a tautology. But from my understanding, big-O is usually used to convert a complex relationship between run time and input size(s) into a simple and elegant estimate of the asymptotic behavior for large input sizes. Usually we keep only the most significant terms and omit the others, along with the constants, since for large n only the most significant terms stand out.

E.g. you have two solutions to one problem: one costs T1(n) = n^2 + 100*n + 30*log(n), the other costs T2(n) = 10000*n + 40*sqrt(n). Using big-O notation, we know that T1(n) is O(n^2) and T2(n) is O(n), so for large inputs method 2 might be more desirable, since its cost grows asymptotically linearly with n.
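To make the crossover concrete, here is a small sketch (my own illustration, plugging in the cost formulas above): for small n the O(n^2) method is actually cheaper because of its smaller constants, but past the crossover point the O(n) method wins, exactly as the big-O comparison predicts.

```python
import math

def t1(n):
    # Cost of method 1: O(n^2) with small constants.
    return n**2 + 100 * n + 30 * math.log(n)

def t2(n):
    # Cost of method 2: O(n) with a large constant factor.
    return 10000 * n + 40 * math.sqrt(n)

for n in (100, 1000, 10000, 100000):
    winner = "T2" if t2(n) < t1(n) else "T1"
    print(f"n={n}: T1={t1(n):.0f}, T2={t2(n):.0f} -> {winner} is cheaper")
```

The crossover here falls between n = 1000 and n = 10000; beyond it, the linear method stays ahead no matter how large n grows.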

Upvotes: 3

kzsnyk

Reputation: 2211

Am I fundamentally misunderstanding big-O notation?

I am afraid you are, because the question "Is any function Big-O itself?" sounds weird to me.

Is the purpose more to prove that some algorithms have definitely smaller run-time then others rather than bound them with an arbitrary function?

Absolutely.

Recall that Big-O is a mathematical notation. It is widely used to simplify the writing of a formula. For example, in a Taylor series expansion of a function f(x), we may not want to write the exact formula of the approximation error, but we still want to know, just by reading the formula, that this error behaves like a polynomial function as x tends to infinity or toward a specific value.

But most of the time, we are interested in the cost in time and in space of a function ( in the programming sense ) and we want to compare the efficiency of two algorithms when the dataset is big.

This is where the Big-O notation comes in handy: we know that we can compare the costs of algorithms A and B by comparing their Big-O counterparts, that is, their cost complexity expressed in terms of a polynomial, constant or logarithmic function ( or whatever is easier to compare ) of a variable n, where n is the size of the dataset ( the size of an array, the number of items in a set... )
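As a hypothetical illustration of that kind of comparison (my own example, not from the answer above): linear search costs O(n) while binary search on a sorted array costs O(log n), so for a big dataset the latter is the clear winner even though both return the same result.

```python
import bisect

def linear_search(xs, target):
    # O(n): inspect each element in turn.
    for i, x in enumerate(xs):
        if x == target:
            return i
    return -1

def binary_search(xs, target):
    # O(log n): halve the sorted search range at every step.
    i = bisect.bisect_left(xs, target)
    return i if i < len(xs) and xs[i] == target else -1

xs = list(range(1_000_000))
# Same answer, vastly different cost for large datasets.
assert linear_search(xs, 750) == binary_search(xs, 750) == 750
```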

Upvotes: 0

OmG

Reputation: 18838

You can prove it from the definition of big-O. Since lim f(n)/f(n) as n goes to \infty is equal to 1, a constant, we can say f(n) = \Theta(f(n)), which means f(n) = O(f(n)) and f(n) = \Omega(f(n)).
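In code, the definitional check is trivially satisfied with the witnesses c = 1 and n0 = 1 (a sketch using an arbitrary example function of my own choosing, echoing what the asker tried in Desmos):

```python
def f(n):
    # An arbitrary example function; any function behaves the same way here.
    return 3 * n**2 + 7 * n + 42

c, n0 = 1, 1
# Definition of big-O: f(n) <= c * f(n) for all n >= n0 -- trivially true,
# which is why no counterexample can be found by plotting.
assert all(f(n) <= c * f(n) for n in range(n0, 10_000))
```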

Upvotes: 3
