Reputation: 17
I have a slight idea but I am not sure about it. Is O(n) also O(n^2)? And if it is, is O(n log_2 n) also O(n^2)?
Upvotes: 0
Views: 4407
Reputation: 1680
O(n) is O(n^2) can be interpreted as "the second function is always greater than (or equal to) the first function for all values greater than n_0, where n_0 >= 0."
More specifically O(n) is O(n^2) means there exists some constant (call it c) and some value for n (call it n_0) such that if n is greater than or equal to n_0 then the second function will always be greater than (or equal to) the first function.
You might also encounter this notation O(n) = O(n^2), which means the same thing as O(n) is O(n^2).
For example, if I have a for loop in Java that iterates twice through the same list of size n, it would be O(n). Why? It would visit each element twice, meaning 2*n total visits would be made; therefore 2*n is O(n), because 2*n <= 4*n for all n >= 0 (where 4 is our constant c).
I picked 4 arbitrarily; it doesn't matter which constant c you use, as long as it ensures the second function is always at least as large as the first once n passes n_0. In computer science, we call such a constant a 'witness'. So I could have picked 3, 100, or even 2 for c and it would still work. As long as 2*n <= c*n for all n >= n_0 (where n_0 is also some constant), 2*n is O(n).
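The two-pass loop described above can be sketched like this (a minimal illustration; the class and method names are made up for the example):

```java
import java.util.Arrays;
import java.util.List;

public class LinearVisits {
    // Visits every element of the list twice: 2*n operations total,
    // which is O(n) because 2*n <= c*n for any witness constant c >= 2.
    static int countVisits(List<Integer> list) {
        int visits = 0;
        for (int ignored : list) visits++; // first pass: n visits
        for (int ignored : list) visits++; // second pass: n more visits
        return visits;
    }

    public static void main(String[] args) {
        List<Integer> list = Arrays.asList(1, 2, 3, 4, 5);
        System.out.println(countVisits(list)); // 2 * 5 = 10 visits, linear in n
    }
}
```

Doubling the list size doubles the visit count, which is exactly what "linear" means here; the constant factor 2 disappears inside the big O.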
The graph below shows what I mean.
f(n) = O(g(n)) means there is some constant c such that f(n) <= c*g(n) for all n beyond some n_0. This is the general definition of big O notation.
In your case, f(n) = n and g(n) = n^2.
Slide source: Prof. Andy Mirzaian, York University EECS 3101.
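Applying the definition to your case, one choice of witness is c = 1 and n_0 = 1, since n <= n^2 whenever n >= 1. A quick numeric check of that inequality (the range 1..1000 is just illustrative):

```java
public class WitnessCheck {
    public static void main(String[] args) {
        long c = 1, n0 = 1;
        boolean holds = true;
        // Check f(n) = n <= c * g(n) = c * n^2 for a range of n >= n_0.
        for (long n = n0; n <= 1000; n++) {
            if (n > c * n * n) {
                holds = false;
                break;
            }
        }
        System.out.println(holds); // true: n is O(n^2) with witness c = 1, n_0 = 1
    }
}
```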
Why do we use this notation? It helps us compare the speeds of our algorithms to help make better choices. Certain kinds of algorithms can only be run on very small inputs because they are too slow. For example, if your algorithm f(n) is O(n^30) and your input size is a million elements, we know before we even run our code this piece of code is going to slow down our program big time because 1,000,000^30 is a massive number. In contrast, if the algorithm f(n) is O(log(n)) then a million elements will be computed very quickly.
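A quick back-of-the-envelope comparison makes the gap concrete (a sketch; the numbers only illustrate the growth rates, not real running times):

```java
public class GrowthCompare {
    public static void main(String[] args) {
        long n = 1_000_000L;
        // log2(n) grows extremely slowly: about 20 steps for a million elements.
        double logSteps = Math.log(n) / Math.log(2);
        System.out.printf("log2(n) is about %.0f steps%n", logSteps);
        // Even n^2 is already a trillion steps; n^30 is astronomically larger.
        double quadraticSteps = Math.pow(n, 2);
        System.out.printf("n^2 is %.0f steps%n", quadraticSteps);
    }
}
```

So before running anything, the big O already tells you that the O(log(n)) algorithm will finish in a handful of steps while a high-degree polynomial will not finish in any reasonable time.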
Upvotes: 1
Reputation: 2139
In simple layman's terms, Big-Oh (O) notation is used to set an upper bound on how much time a piece of code needs to execute.
So consider this example: finding an element in a list of n elements. The simple case would be sequential search, and that needs n comparisons. So we can say the sequential search is O(n). When we say it is of O(n) complexity, what it means is that the code takes at most n comparisons to finish its execution; that is, the most the sequential search ever needs is n comparisons. It is also true (but not precise) to say it takes less than n^2 comparisons. So O(n) can be replaced by O(n^2). But we usually prefer to use the lowest possible upper cap when mentioning Big-Oh notation.
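Sequential search as described above might look like this (a minimal sketch; the class and method names are just illustrative):

```java
public class SequentialSearch {
    // Scans the array left to right: at most n comparisons, hence O(n).
    static int indexOf(int[] a, int target) {
        for (int i = 0; i < a.length; i++) {
            if (a[i] == target) return i; // found after i + 1 comparisons
        }
        return -1; // worst case: all n elements were compared
    }

    public static void main(String[] args) {
        int[] a = {7, 3, 9, 1};
        System.out.println(indexOf(a, 9));  // 2
        System.out.println(indexOf(a, 42)); // -1, after n = 4 comparisons
    }
}
```

The worst case (element absent, or sitting in the last slot) is what the upper bound O(n) captures.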
In Big-Oh, any bound can be replaced by one that is worse in performance, since Big-Oh only defines an upper cap on the execution time.
So if a code snippet has a complexity of O(n), then it is also O(n^2), O(n^3), O(n!), O(2^n), etc.
Performance-wise, from best to worst (the leftmost is the best and the rightmost is the worst):
O(1) > O(log n) > O(n) > O(n log n) > O(n^2) > O(2^n) > O(n!)
Hope this helps! :)
Upvotes: 0