Reputation: 541
I am really confused about what big O, big Theta and big Omega represent: best case, worst case and average case, or upper bound and lower bound?
If the answer is upper bound and lower bound, then whose upper bound and lower bound? For example, let's consider an algorithm: does it have three different expressions or rates of growth for the best case, worst case and average case, and for each case can we find its big O, Theta and Omega?
Lastly, we know merge sort (a divide and conquer algorithm) has a rate of growth or time complexity of n*log(n). Is that the rate of growth of the best case or the worst case, and how do we relate big O, Theta and Omega to it? Please explain via a hypothetical expression.
Upvotes: 1
Views: 1953
Reputation: 11
Assume f, g to be asymptotically non-negative functions.
In brief:
1) Big O Notation
f(n) = O(g(n)) if asymptotically, f(n) ≤ c · g(n), for some constant c.
2) Theta Notation
f(n) = Θ(g(n)) if asymptotically, c1 · g(n) ≤ f(n) ≤ c2 · g(n),
for some constants c1, c2; that is, up to constant factors, f(n) and g(n) are asymptotically similar.
3) Omega Notation
f(n) = Ω(g(n)) if asymptotically, f(n) ≥ c · g(n), for some constant c.
(Informally, asymptotically and up to constant factors, f is at least g).
4) Small o Notation
f(n) = o(g(n)) if lim_{n→∞} f(n)/g(n) = 0.
That is, for every c > 0, asymptotically, f(n) < c · g(n), (i.e., f is an order lower than g).
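These definitions can be played with numerically. Below is a minimal Python sketch (the helper `looks_like_big_o` is made up for illustration); it samples f(n) ≤ c·g(n) at growing n. Sampling cannot prove an asymptotic statement, it only illustrates the definition of big O:

```python
import math

def looks_like_big_o(f, g, c=10, ns=range(10, 100001, 5000)):
    """Heuristic sample-based check of f(n) <= c*g(n).
    Sampling can't prove an asymptotic claim; it only
    illustrates the definition of big O."""
    return all(f(n) <= c * g(n) for n in ns)

f = lambda n: 3 * n**2 + 7 * n
print(looks_like_big_o(f, lambda n: n**2))             # True:  f = O(n^2)
print(looks_like_big_o(f, lambda n: n**3))             # True:  f = O(n^3) as well
print(looks_like_big_o(f, lambda n: n * math.log(n)))  # False: f is not O(n log n)
```

Note that f = O(n³) also holds: big O alone is an upper bound, not the exact order; Θ is what pins the order down.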
For the last part of your question: merge sort is Θ(n·log(n)), that is, both its worst case and its best case are asymptotically sandwiched between c1·n·log(n) and c2·n·log(n) for some constants c1, c2, so the same bound describes every case.
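To make that concrete, here is a small sketch (a plain comparison-counting merge sort written just for this illustration, not a library routine). On both already sorted and shuffled inputs the comparison count stays within constant factors of n·log2(n):

```python
import math
import random

def merge_sort(a):
    """Return (sorted copy, number of comparisons made)."""
    if len(a) <= 1:
        return list(a), 0
    mid = len(a) // 2
    left, lc = merge_sort(a[:mid])
    right, rc = merge_sort(a[mid:])
    merged, i, j, mc = [], 0, 0, 0
    while i < len(left) and j < len(right):
        mc += 1                      # one comparison per merge step
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged += left[i:] + right[j:]
    return merged, lc + rc + mc

for n in (1024, 4096):
    _, best = merge_sort(list(range(n)))              # sorted input (best case)
    _, rand = merge_sort(random.sample(range(n), n))  # random input
    print(n, best, rand, round(n * math.log2(n)))
```

Both counts sit between c1·n·log(n) and c2·n·log(n), which is exactly what the Θ statement says.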
Upvotes: 0
Reputation: 15017
The notations are all about asymptotic growth. Whether they describe the worst case or the average case depends only on what you say they should express.
E.g. quicksort is a randomized algorithm for sorting. Let's say we use it deterministically and always choose the first element in the list as the pivot. Then for every n there exists an input of length n (for example, an already sorted list) such that the worst case is O(n²). But on random lists the average case is O(n log n).
So here I used big O for both the average case and the worst case.
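To see the gap between the two cases, here is a rough sketch (a comparison-counting quicksort with the first element as pivot, written only for this illustration):

```python
import random

def quicksort(a):
    """Deterministic quicksort, first element as pivot.
    Returns (sorted list, number of comparisons)."""
    if len(a) <= 1:
        return list(a), 0
    pivot, rest = a[0], a[1:]
    left  = [x for x in rest if x < pivot]
    right = [x for x in rest if x >= pivot]
    ls, lc = quicksort(left)
    rs, rc = quicksort(right)
    return ls + [pivot] + rs, len(rest) + lc + rc  # len(rest) comparisons to partition

for n in (100, 200, 400):
    _, worst = quicksort(list(range(n)))            # sorted list: worst case
    _, avg = quicksort(random.sample(range(n), n))  # random list: average case
    print(f"n={n}: sorted {worst} comps (n(n-1)/2={n*(n-1)//2}), random {avg} comps")
```

On the sorted list the count hits exactly n(n-1)/2, the quadratic worst case; on the shuffled list it stays near n·log(n).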
Basically this notation is for simplification. If you have an algorithm that does exactly 5n³ - 4n² - 3·log n steps, you can simply write O(n³), get rid of all the crap after n³, and also forget about the constants.
You can use big O to get rid of all monomials except the one with the biggest exponent, and of all constant factors (constant means they don't grow; 10^100 is also a constant).
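For instance (a tiny sketch, nothing more): the ratio of 5n³ - 4n² - 3·log n to n³ settles toward the constant 5, which is exactly why the lower-order terms and the constant factor can be dropped:

```python
import math

f = lambda n: 5 * n**3 - 4 * n**2 - 3 * math.log(n)
for n in (10, 100, 1000, 10000):
    print(n, f(n) / n**3)  # tends to the constant factor 5
```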
At the end, with O(f(n)) you get a set of functions that all have the upper bound f(n) (this means g(n) is in O(f(n)) if you can find a constant number c such that g(n) ≤ c⋅f(n)).
To make it a little easier: I have explained that big O means an upper bound, but not a strict one. So n³ is in O(n³), but so is n². So you can think of big O as a "lower or equal". The same goes for the others.
Little o is a strict "lower": n² is in o(n³), but n³ is not.
Big Omega is a "greater or equal": n³ is in Ω(n³), and so is n⁴.
Little omega is a strict "greater": n³ is not in ω(n³), but n⁴ is.
And big Theta is something like "equal": n³ is in Θ(n³), but neither n² nor n⁴ is.
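This comparison analogy can be mimicked numerically (a crude sketch; a real classification needs the limit for n → ∞, not one sample point): look at the ratio g(n)/f(n) for a large n and see whether it vanishes, levels off, or blows up.

```python
def classify(g, f, n=10**6):
    """Crude one-point ratio test of g against f.
    Only suggestive; a real proof takes the limit n -> infinity."""
    r = g(n) / f(n)
    if r < 1e-3:
        return "o (strictly below)"
    if r > 1e3:
        return "omega (strictly above)"
    return "Theta (same order)"

f = lambda n: n**3
print(classify(lambda n: n**2, f))      # o:     n^2 is strictly below n^3
print(classify(lambda n: 2 * n**3, f))  # Theta: 2n^3 has the same order as n^3
print(classify(lambda n: n**4, f))      # omega: n^4 is strictly above n^3
```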
I hope this helps a little.
Upvotes: 3
Reputation: 3497
So the idea is that O means "on average", one means the best case, and one the worst case. For example, let's think of sorting algorithms. Some of them, such as insertion sort, sort in n time if the items are already in order: you just have to check that they are in order. All of them have a worst-case input where they have to do the most work to order everything.
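A concrete case (a small comparison-counting insertion sort, sketched here because insertion sort really does run in linear time on already sorted input): the best case costs about n comparisons and the worst case about n²/2.

```python
def insertion_sort(a):
    """Return (sorted copy, number of comparisons)."""
    a, comps = list(a), 0
    for i in range(1, len(a)):
        j = i
        while j > 0:
            comps += 1
            if a[j - 1] <= a[j]:
                break                      # element already in place
            a[j - 1], a[j] = a[j], a[j - 1]
            j -= 1
    return a, comps

n = 1000
print(insertion_sort(list(range(n)))[1])         # sorted input:   n-1 = 999 comparisons
print(insertion_sort(list(range(n, 0, -1)))[1])  # reversed input: n(n-1)/2 = 499500
```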
Upvotes: 0