J. Woodring

Reputation: 111

Explain Time Complexity?

How does one find the time complexity of a given algorithm notated both in N and Big-O? For example,

//One iteration of the parameter - n is the basic variable
void setUpperTriangular (int intMatrix[0,…,n-1][0,…,n-1]) {
    for (int i=1; i<n; i++) {     //Time Complexity {1 + (n+1) + n} = {2n + 2}
        for (int j=0; j<i; j++) { //Time Complexity {1 + (n+1) + n} = {2n + 2}
            intMatrix[i][j] = 0;  //Time complexity {n}
        }
    } //Combining both, it would be {2n + 2} * {2n + 2} = 4n^2 + 4n + 4 TC
} //O(n^2)

Is the Time Complexity for this O(n^2) and 4n^2 + 4n + 4? If not, how did you get to your answer?

Also, I have a question about the time complexity of a function that takes two array parameters.

//Two iterations in the parameter, n^2 is the basic variable
void division (double dividend [0,…,n-1], double divisor [0,…,n-1]) {
    for (int i=0; i<n; i++) {         //TC {1 + (n^2 + 1) + n^2} = {2n^2 + 2}
        if (divisor[i] != 0) {        //TC n^2
            for (int j=0; j<n; j++) { //TC {1 + (n^2 + 1) + n^2} = {2n^2 + 2}
                dividend[j] = dividend[j] / divisor[i]; //TC n^2
            }
        }
    } //Combining all, it would be {2n^2 + 2} + n^2(2n^2 + 2) = 2n^3 + 4n^2 + 2 TC
} //O(n^3)

Would this one be O(N^3) and 2n^3 + 4n^2 + 2? Again, if not, can somebody please explain why?

Upvotes: 0

Views: 369

Answers (3)

Mw1993

Reputation: 26

What you're looking for in big O time complexity is the approximate number of times an instruction is executed. So, in the first function, you have the executable statement:

intMatrix[i][j] = 0;

Since the executable statement takes the same amount of time every time, it is O(1). So, for the first function, you can cut it down to look like this and work back from the executable statement:

i: execute n-1 times {        // i runs from 1 to n-1
    j: execute i times {
        intMatrix[i][j] = 0;  // Time complexity = 1
    }
}                             // total executions = 1+2+...+(n-1) = n(n-1)/2

Working back, the i loop executes n-1 times (i runs from 1 to n-1) and, for each i, the j loop executes i times. For example, if n = 5, the inner statement executes 1+2+3+4 = 10 times. This is an arithmetic series, which can be represented by n(n-1)/2. The time complexity of the function is therefore n(n-1)/2 = n^2/2 - n/2 = O(n^2).

For the second function, you're looking at something similar. Your executable statement is:

dividend[j] = dividend[j] / divisor[i];

Now, this statement is a little more complicated: as you can see from Wikipedia, the complexity of schoolbook long division is quadratic in the number of digits of the operands. However, the dividend and divisor values DO NOT depend on your variable n (the array length). Let's call the number of digits in the matrix entries "m". The time complexity of the executable statement is then O(m^2). Moving on to simplify the second function:

i: execute n times{ //Time complexity=n*(n*(1*m^2))=O(n^2*m^2)
    j: execute n times{ //Time complexity=n*(1*m^2)
        if statement: execute ONCE{ //Time complexity=1*m^2
            dividend[j] = dividend[j] / divisor[i]; //Time complexity=m^2
        }
    }
}

Working backwards, you can see that the inner statement will take O(m^2), and since the if statement takes the same amount of time every time, its time complexity is O(1). Your final answer is then O(n^2 m^2). Since division takes so little time on modern processors, it is usually estimated at O(1) (see this for a better explanation of why), so what your professor is probably looking for is O(n^2) for the second function.

Upvotes: 1

Mikkel Løkke

Reputation: 3749

Big O notation, or time complexity, describes the relationship between a change in data size (n) and the magnitude of time or space a given algorithm requires to process it.

In your case you have two loops. For each value of n in the outer loop, you process up to n items in the inner loop. Thus you have O(n^2), or "quadratic", time complexity.

So for small values of n the difference is negligible, but for larger values of n the running time quickly grinds to a halt.

Eliminating 0 from the divisor as in algorithm 2 does not significantly change the time complexity, because checking whether a number equals 0 is O(1), several orders of magnitude less than O(n^2). Skipping the inner loop in that specific case still costs O(n) for the checks, which is dwarfed by the O(n^2) work otherwise. Your second algorithm thus technically becomes O(n) in the best case (if the divisor series contains only zeros).

Upvotes: 1

Pavel Horal

Reputation: 18194

Both are O(N^2). You are processing N^2 items in the worst case.

The second example might be just O(N) in the best case (if the second argument is all zeros).

I am not sure how you got the other polynomials. Usually the exact complexity is of no importance (especially when working with a higher-level language).

Upvotes: 2
