Reputation: 1508
I am studying the book Introduction to Algorithms by Thomas H. Cormen, and I am currently on the chapter about asymptotic notation. One thing is bothering me, because the author states that:
f(n) = Theta(g(n)) implies f(n) = O(g(n)), since Theta notation is a stronger notion than O-notation. HOW??
The author also states that any quadratic function an^2 + bn + c, where a > 0, is in Theta(n^2), and that this also shows that such a quadratic function is in O(n^2). HOW??
Upvotes: 1
Views: 347
Reputation: 7996
I think you are a bit confused by the terms.
f(n) = O(g(n)) means that g(n) is an upper bound of f(n). Formally: there exist constants n0 and c such that for all n > n0, f(n) <= c*g(n). You can imagine it as two graphs, where c*g(n) lies above f(n).
For example: 5n^2 + n = O(n^2).
Why? Because if, for example, n0 = 10 and c = 10, then for all n > n0: 5n^2 + n <= 10*n^2.
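As a quick numeric sanity check (not a proof, since a proof must cover all n > n0), here is a small Python snippet that tests those constants over a finite range; the function names f and g just mirror the notation above:

```python
# Sanity-check the claim 5n^2 + n <= 10*n^2 for n > n0, with n0 = 10, c = 10.
# This only samples a finite range of n, so it illustrates the bound
# rather than proving it.

def f(n):
    return 5 * n**2 + n

def g(n):
    return n**2

n0, c = 10, 10
assert all(f(n) <= c * g(n) for n in range(n0 + 1, 10_000))
print("f(n) <= c*g(n) held for every tested n >", n0)
```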
f(n) = Theta(g(n)) means that g(n) is both an upper and a lower bound of f(n). Formally: there exist constants n0, c1, and c2 such that for all n > n0, c1*g(n) <= f(n) <= c2*g(n). You can imagine it as three graphs, where f(n) lies between c1*g(n) and c2*g(n).
For example: 5n^2 + n = Theta(n^2).
Why? Because if, for example, n0 = 100, c1 = 1, and c2 = 100, then for all n > n0: n^2 <= 5n^2 + n <= 100*n^2.
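The same kind of finite sanity check works for the two-sided bound; again this only illustrates the chosen constants, it does not prove the inequality for all n > n0:

```python
# Check c1*n^2 <= 5n^2 + n <= c2*n^2 on a sample of n > n0, with the
# constants n0 = 100, c1 = 1, c2 = 100 chosen above.

def f(n):
    return 5 * n**2 + n

n0, c1, c2 = 100, 1, 100
assert all(c1 * n**2 <= f(n) <= c2 * n**2 for n in range(n0 + 1, 10_000))
print("c1*g(n) <= f(n) <= c2*g(n) held for every tested n >", n0)
```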
Upvotes: 2
Reputation: 19621
(In V1 of the book) the definition of f(n) being in Theta(g(n)) is that there are positive constants C1 and C2 such that 0 <= C1*g(n) <= f(n) <= C2*g(n) for all n >= N0.
The definition of O(g(n)) is that there is a single constant C such that 0 <= f(n) <= C*g(n) for all n >= N0.
So if you can find big enough constants N0, C1, and C2 to satisfy the first definition, you can reuse N0 and take C = C2 to satisfy the second definition. Therefore the first definition is stronger than the second, in the sense that anything satisfying the first also satisfies the second; the business about the quadratic is just a special case of this.
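To make the "reuse the constants" step concrete, here is a small Python sketch; the helper names is_theta_witness and is_big_o_witness are made up for this example, and the checks only sample a finite range of n:

```python
# A Theta witness (N0, C1, C2) for f and g can be reused as an O witness
# (N0, C) by simply taking C = C2 and dropping C1.

def is_theta_witness(f, g, N0, C1, C2, upto=10_000):
    # Check 0 <= C1*g(n) <= f(n) <= C2*g(n) on a finite sample of n >= N0.
    return all(0 <= C1 * g(n) <= f(n) <= C2 * g(n) for n in range(N0, upto))

def is_big_o_witness(f, g, N0, C, upto=10_000):
    # Check 0 <= f(n) <= C*g(n) on the same finite sample.
    return all(0 <= f(n) <= C * g(n) for n in range(N0, upto))

f = lambda n: 5 * n**2 + n  # the quadratic an^2 + bn + c with a=5, b=1, c=0
g = lambda n: n**2

# The Theta witness holds, and reusing N0 with C = C2 gives an O witness:
assert is_theta_witness(f, g, N0=1, C1=1, C2=10)
assert is_big_o_witness(f, g, N0=1, C=10)
```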
Upvotes: 0