freshmaster

Reputation: 333

Solving for Ω and Θ (O, Omega and Theta notations)

I have solved a recurrence relation that has a running time of Θ(2^n), i.e. exponential time. How do I find Ω and O for the same recurrence relation?

I guess if it is Θ(2^n), it should also be O(2^n), am I right? How do I find Ω, the lower bound?

I tried solving the recurrence relation:

T(n) = 2T(n-1) + C.

Upvotes: 2

Views: 5858

Answers (2)

Shahbaz

Reputation: 47533

If a function is Θ(f(n)), it is both O(f(n)) and Ω(f(n)).

The converse is also true: if a function is both O(f(n)) and Ω(f(n)), then it is Θ(f(n)).

The proof for this is simple:

g(n)=Θ(f(n)) => exists n0, c1 and c2 such that: c1f(n)<g(n)<c2f(n) for n>n0

Take only the first half:

exists n0 and c1 such that: c1f(n)<g(n) for n>n0 => g(n)=Ω(f(n))

And the second half:

exists n0 and c2 such that: g(n)<c2f(n) for n>n0 => g(n)=O(f(n))

The proof of the converse is similar.
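
For reference, here is a common textbook formulation of the three definitions used in the proof (some texts, like the lines above, write strict inequalities; that difference doesn't affect anything here):

    \[
    \begin{aligned}
    g(n) = O(f(n))      &\iff \exists\, c > 0,\ n_0 \text{ such that } g(n) \le c\, f(n) \text{ for all } n > n_0,\\
    g(n) = \Omega(f(n)) &\iff \exists\, c > 0,\ n_0 \text{ such that } g(n) \ge c\, f(n) \text{ for all } n > n_0,\\
    g(n) = \Theta(f(n)) &\iff \exists\, c_1, c_2 > 0,\ n_0 \text{ such that } c_1 f(n) \le g(n) \le c_2 f(n) \text{ for all } n > n_0.
    \end{aligned}
    \]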

Edit: A little light on what Θ, O and Ω mean.

You must have already seen the definitions of Θ, O and Ω (if not, I just mentioned all three of them in the proof above).

So besides their mathematical definition, what do they mean?

Basically, think of them like this:

  • g(n) = O(f(n)) This means that when n gets large, f(n) always has a bigger value than g(n). Well, to be more precise, not f(n) itself, but a constant multiple of it. For example, n+100 is O(n^2), because for n>10, 1*n^2>n+100. Also, for n>3, 11*n^2>n+100. The thing with all these notations is that the constant doesn't play an important role; it is f(n) that determines how the function grows. Note that the O notation gives an upper bound of the function and is therefore not unique. For example, if g(n)=O(n), then it is also O(n^2), O(nlogn), etc., but (maybe) not O(sqrt(n)). (There is a small numeric check of this example right after this list.)
  • g(n) = Ω(f(n)) This is exactly the opposite of O. Therefore, it shows that f(n) is a lower bound for g(n) (again up to a constant factor), and again it's not unique. For example, if g(n)=Ω(n), then it is also Ω(1) and Ω(1/n). You always have

    g(n) = Ω(f(n)) <=> f(n) = O(g(n))
    
  • g(n) = Θ(f(n)) This is a tight bound on the growth of your function. It basically means g(n) has the same growth as f(n), although it may not be as smooth as f(n). That "not as smooth as f(n)" part is the beauty of what Θ does: you may have an algorithm whose execution time is not simply expressed, but with Θ you can still analyze it. g(n) = Θ(f(n)) looks something like this picture:

    |                     ---- 2*f(n)
    |                    /    /\  --- g(n)
    |                ----    /  \/    -------- f(n)
    |               /       /        /
    |           ----   /\  / --------
    |          /  -----  \/ /
    |      ----  /  --------
    |     /     /  /
    | ---- --------
    |/    /
    +---------------------------------------------
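
For the O example in the first bullet above, here is a quick numeric check in Python (not a proof, since it only samples a finite range; g, f, c and n0 are the ones from that example):

    # Check c*f(n) >= g(n) for g(n) = n + 100, f(n) = n^2, c = 1, n0 = 10.
    # This only samples a finite range, so it illustrates the definition
    # rather than proving g(n) = O(f(n)).
    def g(n):
        return n + 100

    def f(n):
        return n * n

    c, n0 = 1, 10
    print(all(c * f(n) >= g(n) for n in range(n0 + 1, 10_000)))  # prints True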
    

Fun facts:

  • f(n) = O(f(n)), because for all n you have 2*f(n)>f(n) (2 is just an example; any constant bigger than 1 works). The smallest upper bound of a function is the function itself!
  • Similarly, f(n) = Ω(f(n)) and f(n) = Θ(f(n)). Also, the biggest lower bound of a function is the function itself.
  • Θ shows the exact growth of a function, and if you find it for your algorithm, then it's the best description. However, many algorithms have complex behavior, or at best have different growth depending on their input (for example, insertion sort is Θ(n) given a sorted input and Θ(n^2) given an inversely sorted input; there is a small experiment showing this right after this list). Therefore, finding the Θ of an algorithm is not always possible.
  • O shows an upper bound on the execution time of the algorithm. Usually this is what people compute for their algorithms. By doing that, they are saying: no matter what, my algorithm is not worse than this, but in some cases it could be better. For example, insertion sort is O(n^2).
  • Sometimes O doesn't show the best (smallest) upper bound, because finding that is simply too difficult. In those cases, O still comes to the rescue. For example, if you have an algorithm that works with inputs of size 1000 in a UI application (where a delay of, say, 0.5s is fine), then knowing your algorithm is O(n^2) is good enough, even though its worst-case execution is in fact lower than that (it was just too hard for you to find). The best upper bound is the growth rate of the worst-case execution of the algorithm. In the insertion-sort example, the worst-case execution is Θ(n^2), and thus the best upper bound you can give the algorithm is O(n^2) (as opposed to O(n^3), for example).
  • Ω is not so useful in practice; you never want to say "my algorithm is at best like this, but it could be worse" (that's bad advertising!). However, in computer science it is very useful, most of the time to prove that something cannot be done in a better way. For example, if there is an algorithm solving a problem in Θ(n^3), and you prove that any algorithm solving the problem must be Ω(n^3), then there will never be an asymptotically better algorithm than the one you already have (so you kind of tell the others: don't look for it, you won't find it).
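
To see the insertion-sort point concretely, here is a small experiment (a sketch only; the step counter is a crude proxy for running time, and insertion_sort_steps is just a name made up for this illustration):

    # Count basic steps of insertion sort: roughly n on a sorted input,
    # roughly n^2/2 on a reverse-sorted input, illustrating Theta(n) vs Theta(n^2).
    def insertion_sort_steps(values):
        a = list(values)
        steps = 0
        for i in range(1, len(a)):
            key = a[i]
            j = i - 1
            steps += 1                    # comparison that may stop the inner loop
            while j >= 0 and a[j] > key:
                a[j + 1] = a[j]           # shift one element to the right
                j -= 1
                steps += 1
            a[j + 1] = key
        return steps

    n = 1000
    print(insertion_sort_steps(range(n)))         # about n (already sorted)
    print(insertion_sort_steps(range(n, 0, -1)))  # about n^2 / 2 (reverse sorted)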

There is a lot of O-notation mathematics out there that is not hard to understand or to invent yourself. Some examples are:

  • O(f(n)+g(n)) = O(max(f(n),g(n)))
  • O(f(n))+O(g(n)) = O(f(n)+g(n))
  • O(f(n))O(g(n)) = O(f(n)g(n))
  • O(cf(n)) = O(f(n)) where c is a constant

You can prove most of these immediately by putting them in the definition of the O notation.
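
As an example, here is a sketch of why the first rule holds (assuming f and g are non-negative, as running times are):

    \[
    \max\bigl(f(n), g(n)\bigr) \;\le\; f(n) + g(n) \;\le\; 2\,\max\bigl(f(n), g(n)\bigr),
    \]

so any function bounded above by a constant times one of the two expressions is also bounded above by a constant times the other, which is exactly the statement O(f(n)+g(n)) = O(max(f(n),g(n))).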

The first of those rules is perhaps the most useful. If your algorithm has two sections, one setting some array of size n to zero and one doing something that is O(nlogn), then the overall order is O(n+nlogn), which is simply O(nlogn).

This implies that, mathematically speaking, it is better to have a thousand O(nlogn) preprocessing passes plus an O(nlogn) algorithm solving a problem than to have a single concise O(n^1.5) algorithm. Note that in practice, though, it is not necessarily better. Why, you ask? The first one is O(nlogn) and the second is O(n^1.5), so the first is better, you say? Well, remember that the O notation describes the behavior of the function asymptotically. That means that if your input gets very, very large, the first algorithm (the one with a thousand preprocessing passes) is better, but in the practical range of your n, 1000nlogn may be much larger than n^1.5.
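
If you want to see where that happens for these particular constants, a few sample values are enough (the constant 1000 is the one from the paragraph above; the crossover point is only meaningful for this made-up comparison):

    # Compare 1000 * n * log2(n) with n**1.5 at a few input sizes.
    # For these constants, n**1.5 stays smaller until n gets close to 10^9,
    # so the "asymptotically worse" algorithm wins for most practical n.
    from math import log2

    for n in (10**3, 10**6, 10**9, 10**12):
        print(n, 1000 * n * log2(n), n ** 1.5)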

Upvotes: 5

malejpavouk

Reputation: 4445

If it's homework, you could try to draw it as a recursion tree, where nodes represent the complexity of operations required by the function calls.

EDIT: About the lower bound: Omega is defined as a lower bound. If you have Theta (the exact behavior), you also have O and Omega... they are just less precise.

So

Theta(n) <=> O(n) AND Omega(n)

SPOILER

If I remember correctly, this is how you interpret it...

you have a tree; at its root you have only C (the work to merge the solutions), and it splits into two branches (again with C as the work), and the nodes keep branching like this n times

     C
    /|
   C C
  /| |\
 C C C C
/| ......

Complete solution

because the tree has a depth of n, at the bottom you have 2^n nodes, all with complexity C; above that, the levels contribute C, 2C, 4C, ..., 2^(n-1)*C of work, which sums up to (2^n - 1)*C, roughly 2^n*C

So the final complexity should be about 2*(2^n)*C, which is Θ(2^n)
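
One way to make the tree argument explicit is to unroll the recurrence directly (assuming the base case T(0) is some constant; its exact value doesn't change the growth):

    \[
    \begin{aligned}
    T(n) &= 2\,T(n-1) + C \\
         &= 4\,T(n-2) + 2C + C \\
         &= \dots \\
         &= 2^n\,T(0) + C\,(2^{n-1} + \dots + 2 + 1) \\
         &= 2^n\,T(0) + C\,(2^n - 1) = \Theta(2^n).
    \end{aligned}
    \]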

Upvotes: 4
