Patrick88

Reputation: 77

Asymptotic runtime for an algorithm

I've decided to try a problem about analyzing the worst possible runtime of an algorithm, to gain some practice. Since I'm a beginner, I only need help with expressing my answer in the right way. I came across this problem in a book that uses the following algorithm:

Input: A set of n points (x1, y1), . . . , (xn, yn) with n ≥ 2.
Output: The squared distance of a closest pair of points.

ClosePoints

1. if n = 2 then return (x1 − x2)^2 + (y1 − y2)^2
2. else
3. d ← ∞
4. for i ← 1 to n − 1 do
5.   for j ← i + 1 to n do
6.     t ← (xi − xj)^2 + (yi − yj)^2
7.     if t < d then
8.       d ← t
9. return d
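
For concreteness, here is a minimal runnable Python sketch of the pseudocode above (the function name close_points and the sample points are my own, not from the book):

    # A direct translation of ClosePoints; d starts at infinity so the
    # first computed squared distance always replaces it.
    def close_points(points):
        n = len(points)
        assert n >= 2
        if n == 2:
            (x1, y1), (x2, y2) = points
            return (x1 - x2) ** 2 + (y1 - y2) ** 2
        d = float("inf")                    # line 3: d <- infinity
        for i in range(n - 1):              # line 4: i = 1 .. n-1 (0-based here)
            for j in range(i + 1, n):       # line 5: j = i+1 .. n
                xi, yi = points[i]
                xj, yj = points[j]
                t = (xi - xj) ** 2 + (yi - yj) ** 2   # line 6
                if t < d:                   # line 7
                    d = t                   # line 8
        return d

    print(close_points([(0, 0), (3, 4), (1, 1)]))  # prints 2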

My question is: how can I give a good proof that T(n) = O(n^2), T(n) = Ω(n^2), and T(n) = Θ(n^2), where T(n) represents the worst possible runtime? I know that we say f is O(g) if and only if there is an n0 ∈ N and c > 0 in R such that for all n ≥ n0 we have f(n) ≤ cg(n). Similarly, we say that f is Ω(g) if there is an n0 ∈ N and c > 0 in R such that for all n ≥ n0 we have f(n) ≥ cg(n).

Now I know that the algorithm is doing c * n(n - 1) iterations, yielding T(n) = c*n^2 - c*n. The first 3 lines are executed O(1) times, line 4 loops for n - 1 iterations, which is O(n), and line 5 loops for n - i iterations, which is also O(n). Does each line of the inner loop's body (lines 6-7) take (n - 1)(n - i) time or just O(1), and why? The only variation is how many times line 8 (d ← t) is performed, but that is at most O(n^2).
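
To convince yourself of the count, here is a small sketch (my own, for illustration) that tallies how many times the inner body (lines 6-7) runs and compares it with n(n - 1)/2:

    # Count how often lines 6-7 execute for a given n and compare
    # against the closed form n(n - 1) / 2.
    def inner_iterations(n):
        count = 0
        for i in range(1, n):              # line 4: i = 1 .. n-1
            for j in range(i + 1, n + 1):  # line 5: j = i+1 .. n
                count += 1                 # one execution of lines 6-7
        return count

    for n in [2, 5, 10, 100]:
        print(n, inner_iterations(n), n * (n - 1) // 2)  # the two counts agree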

So, how should I write a good and complete proof that T(n) = O(n^2), T(n) = Ω(n^2), and T(n) = Θ(n^2)? Thanks in advance.

Upvotes: 0

Views: 2413

Answers (3)

Dhruv Gairola

Reputation: 9182

If you observe the two for loops, each loop gives an O(n) because each loop counter advances in a linear fashion; hence the two loops combined give roughly O(n^2) complexity. The whole point of big-oh is to find the dominating term: coefficients do not matter. I would strongly recommend formatting your pseudocode properly so that it is not ambiguous. In any case, the if/else branching does not affect the complexity of the algorithm.

Let's look at the various definitions:

Big-Oh

• f(n) is O(g(n)) if f(n) is asymptotically “less than or equal” to g(n)

Big-Omega

• f(n) is Ω(g(n)) if f(n) is asymptotically “greater than or equal” to g(n)

Big-Theta

• f(n) is Θ(g(n)) if f(n) is asymptotically “equal” to g(n)

So all you need to do is find constants c and n0 that satisfy each definition.

Upvotes: 0

IVlad

Reputation: 43477

Count the number of times t changes its value. Since changing t is the innermost operation performed, finding how many times that happens will allow you to find the complexity of the entire algorithm.

i = 1 => j runs n - 1 times (t changes value n - 1 times)
i = 2 => j runs n - 2 times
...
i = n - 1 => j runs 1 time

So the number of times t changes is 1 + 2 + ... + (n - 1). This sum equals n(n - 1) / 2, which is bounded above by 0.5 * n^2.

Now just find appropriate constants and you can prove that this is Ω(n^2), O(n^2), Θ(n^2).
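For example (one workable choice of constants among many): since T(n) = n(n - 1) / 2 ≤ 0.5 * n^2 for all n ≥ 1, taking c = 0.5 and n0 = 1 proves T(n) = O(n^2). And since n - 1 ≥ n/2 whenever n ≥ 2, we have T(n) = n(n - 1) / 2 ≥ 0.25 * n^2, so taking c = 0.25 and n0 = 2 proves T(n) = Ω(n^2). Having both bounds gives T(n) = Θ(n^2).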

Upvotes: 2

rmmh

Reputation: 7095

T(n) = c*n^2 - c*n approaches c*n^2 for large n; picking a suitable constant and n0 in the definition of big-O then gives T(n) = O(n^2).
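
As a quick numerical sanity check (my own snippet, taking c = 1), the lower-order term becomes negligible as n grows:

    # The ratio (n^2 - n) / n^2 tends to 1, so the n^2 term dominates.
    for n in [10, 100, 1000, 10**6]:
        print(n, (n**2 - n) / n**2)  # 0.9, 0.99, 0.999, 0.999999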

Upvotes: 2
