Raisa A

Reputation: 486

Explanation for asymptotic notation graph

I am currently going over an algorithms course and I have come across the following graph:

[image: a graph plotting a purple curve f(n) against a blue curve c g(n), with the curves crossing near a point n0]

I am having a hard time understanding what f(n) stands for, what g(n) means and what c and n0 mean. Furthermore, I want to know how they are related to the concept of bounds and what that means.

Most tutorials just start by explaining how the above terms are related, but not what they mean.

Upvotes: 2

Views: 1087

Answers (1)

Stef

Reputation: 15525

Imagine you wrote a program to sort a list. Your program takes a list as input, performs computations for some time, then outputs a sorted list.

The "big-O" notation f(n) = O(g(n)) is a way to compare two functions f and g.

When dealing with numbers, you are used to making comparisons, for instance "x < y".

When dealing with functions, we have several different ways of making comparisons. For instance, you could say "function f is always smaller than function g" and you would write that formally: "forall n, f(n) < g(n)".

However, this comparison is a bit extreme. On the graph you have shown in your question, the purple function f is not always smaller than the blue function, which is c g, a constant multiple of g. But you can notice that it is only larger for a few small values of n. So it makes sense to say "after a few initial values, function f finally becomes smaller than function c g", or formally "there exists a number n0 such that for all n > n0, f(n) < c g(n)".

Now, the functions we wanted to compare originally were f and g, not f and cg. However, maybe we don't care about a multiplicative constant; maybe we only care how f(n) behaves when n increases, and how g(n) behaves when n increases. For instance, maybe you notice that when n is 10 times bigger, f(n) becomes roughly 100 times bigger. This means that f(n) is roughly proportional to n^2. It doesn't matter to us whether f is about equal to n^2 + 5 or to 7 * n^2 + 2 * n + 12. What matters to us is that it's roughly proportional to n^2.
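You can see this numerically with a small Python sketch. The function below is just the made-up example from the previous paragraph, 7 * n^2 + 2 * n + 12; the point is that the ratio f(10n) / f(n) approaches 100 as n grows, so the constant 7 and the lower-order terms stop mattering:

```python
# Made-up example function from the text: roughly proportional to n^2.
def f(n):
    return 7 * n**2 + 2 * n + 12

# When n gets 10 times bigger, f(n) gets roughly 100 times bigger.
for n in [10, 100, 1000, 10000]:
    print(n, f(10 * n) / f(n))  # ratios tend towards 100
```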

So Big-O notation gives us a way to compare two functions f(n) and g(n) while ignoring multiplicative constants, and ignoring values of f(n) and g(n) for small values of n.

f(n) = O(g(n)) literally means "there exists a value n0 and a multiplicative constant c such that, as long as n is bigger than n0, f(n) will be smaller than c g(n)".
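To make the quantifiers concrete, here is a sketch that checks the inequality numerically for one assumed choice of f, g, c and n0 (these are illustrative choices, not values taken from the question's graph):

```python
def f(n):
    return (n**2 - n) / 2  # an illustrative f

def g(n):
    return n**2

# Assumed witnesses: with c = 1 and n0 = 1, f(n) < c * g(n) holds
# for every tested n > n0, which is what f(n) = O(g(n)) asserts.
c, n0 = 1, 1
assert all(f(n) < c * g(n) for n in range(n0 + 1, 10_000))
```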

This definition is relevant to complexity analysis of algorithms. Imagine you have written an algorithm to sort a list. Let's call f(n) the maximum number of operations your algorithm needs to sort a list of n items. You want to prove that your algorithm is efficient. You want to make statements such as "f(n) = O(n^2)", which would mean roughly "if the size of the list is multiplied by 10, then the time of execution will be multiplied by less than 100". This doesn't give the exact number of operations - maybe f(n) = 4 n^2, or maybe f(n) = (n^2 - n) / 2. But who cares about the exact number? What's important is that if the list is 10 times longer, the time of execution will be at most 100 times longer.
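As an illustration (a simple selection sort, not necessarily the algorithm you wrote), you can count the comparisons the sort performs and watch a 10 times longer list cost roughly 100 times more work:

```python
def count_comparisons(n):
    """Sort a reversed list of n items with selection sort,
    counting the comparisons performed."""
    items = list(range(n, 0, -1))
    comparisons = 0
    for i in range(len(items)):
        smallest = i
        for j in range(i + 1, len(items)):
            comparisons += 1
            if items[j] < items[smallest]:
                smallest = j
        items[i], items[smallest] = items[smallest], items[i]
    return comparisons

print(count_comparisons(100))   # 4950
print(count_comparisons(1000))  # 499500: ~100x more for a 10x longer list
```

Here the exact count happens to be f(n) = (n^2 - n) / 2, and indeed f(n) = O(n^2).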

Upvotes: 4
