user9467747

Reputation:

Understanding when to use theta for time complexity

I believe I understand the definitions of Big-O, Big-Ω and Big-Θ: Big-O is the asymptotic upper bound, Big-Ω is the asymptotic lower bound, and Big-Θ is the asymptotically tight bound. However, I keep getting confused by the usage of Θ in certain situations, such as in insertion sort:

[Image: the running time of insertion sort, given as Ω(n) (lower bound) and O(n^2) (upper bound)]

From what I understand this says that the insertion sort will:

  1. Take at least linear time (it won't run any faster than linear time), according to Big-Ω.

  2. Take at most n^2 time (it won't take any longer than n^2), according to Big-O.

The confusion arises from my understanding of when to use Big-Θ. I was led to believe that you can only use Big-Θ when the Big-O and Big-Ω bounds are the same. If that's the case, why is insertion sort considered to be Θ(n^2) when its Ω and O values are different?

Upvotes: 1

Views: 627

Answers (1)

Zoomba

Reputation: 1806

Basically, you can only use Big-Θ when there is no asymptotic gap between the upper bound and the lower bound on the running time of the algorithm:
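Equivalently, using the usual textbook definition:

    f(n) = Θ(g(n))  ⟺  f(n) = O(g(n)) and f(n) = Ω(g(n)),

that is, there are constants c1, c2 > 0 and n0 such that c1·g(n) ≤ f(n) ≤ c2·g(n) for all n ≥ n0.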

In your example, insertion sort takes at most O(n^2) time (in the worst case) and it takes Ω(n) time (in the best case). So O(n^2) is an upper bound on the algorithm's running time, and Ω(n) is a lower bound. Since these two are not the same, you cannot use Big-Θ to describe the running time of insertion sort in general. (When people call insertion sort Θ(n^2), they usually mean its worst-case running time alone, which is both O(n^2) and Ω(n^2).)
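A minimal sketch of insertion sort (in Python, just for illustration) makes the gap concrete: the amount of work the inner loop does depends on how sorted the input already is.

```python
def insertion_sort(a):
    """Sort the list a in place using insertion sort."""
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        # Shift larger elements one slot to the right. On already-sorted
        # input this loop exits immediately (the Ω(n) best case); on
        # reverse-sorted input it runs i times (the O(n^2) worst case).
        while j >= 0 and a[j] > key:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key
    return a
```

On input [1, 2, 3, 4] the while loop never iterates, so the sort does a constant amount of work per element; on [4, 3, 2, 1] it shifts every previous element, giving roughly n^2/2 shifts.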

However, consider the selection sort algorithm. Its worst-case running time is O(n^2), and its best-case running time is Ω(n^2). Therefore, since the upper bound and the lower bound are the same (asymptotically), you can say that the running time of selection sort is Θ(n^2).
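For comparison, here is a sketch of selection sort (again Python, for illustration). The inner scan always walks the entire unsorted suffix, no matter what the input looks like, which is why the best and worst cases coincide.

```python
def selection_sort(a):
    """Sort the list a in place using selection sort."""
    n = len(a)
    for i in range(n):
        # Find the minimum of the unsorted suffix a[i:]. This scan runs
        # regardless of the input order, so the algorithm always makes
        # about n^2/2 comparisons: best and worst case are both Θ(n^2).
        min_idx = i
        for j in range(i + 1, n):
            if a[j] < a[min_idx]:
                min_idx = j
        a[i], a[min_idx] = a[min_idx], a[i]
    return a
```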

Upvotes: 3
