Water Cooler v2

Reputation: 33880

What would you call the time complexity of an algorithm of this sort?

I am using C# syntax, but this question is not specific to C#.

Example 1

public static long Do(long n)
{
    var sqrt = (long)Math.Sqrt(n);
    long result = 0;

    for (long i = 0; i < sqrt; i++)
    {
        // do something
    }

    return result;
}

Would that still be linear time, even though in the worst case we perform the operation only sqrt(n) times, which is a very small fraction of n?

Example 2

And how would you classify the time complexity of the algorithm below?

public static long Do(long n)
{
    long result = 0;

    while (n > 1)
    {
        n = (long)Math.Sqrt(n);

        // do something
    }

    return result;
}

Would this be called logarithmic time in the worst case, even though once again we're not just halving the number of iterations each time, but reducing it by far more than half?

Upvotes: 7

Views: 1158

Answers (4)

user835611

Reputation: 2376

The first snippet has complexity O(sqrt(n)).

The second snippet has complexity O(log(log(n))). It takes only a little tenth-grade algebra to solve the equation n^(1/2^i) ≤ 2 for i, and that i is the complexity of the second snippet.
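
Spelled out, that algebra is just taking the base-2 logarithm twice:

n^(1/2^i) ≤ 2  =>  (1/2^i) · log2(n) ≤ 1  =>  2^i ≥ log2(n)  =>  i ≥ log2(log2(n))

so the loop body runs about log2(log2(n)) times.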

Upvotes: 0

displayName

Reputation: 14399

Linear time means that for an input of size n, the running time is proportional to n itself; in other words, the time complexity for a problem of size n is k * n. (Guess what quadratic time means? For an input of size n, the running time is proportional to n^2, i.e. the time complexity is c * n^2.)

  • For your first snippet, for an input of size n, the running time will be proportional to sqrt(n). It can therefore be written as a * sqrt(n), which is not the same function as k * n. Growth of that kind is usually just called a square-root relation.
  • For your second snippet, you keep taking the square root of n until it reduces to 1. The recurrence for that can be written as:
    • T(n) = T(sqrt(n)) + 1
    • To solve such a recurrence, substitute n = 2^(2^i). Then i is the number of times the snippet runs, giving a double log of n, or Log(Log(n)), as the running time; see the unrolling below.
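
Unrolling that recurrence makes the double log explicit (a quick sketch, counting one unit of work per square root):

T(n) = T(n^(1/2)) + 1 = T(n^(1/4)) + 2 = ... = T(n^(1/2^i)) + i

and with n = 2^(2^i), the argument n^(1/2^i) equals 2^(2^i / 2^i) = 2 after i = log2(log2(n)) steps, so T(n) = O(log(log(n))).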

To solve your second snippet, I actually asked this question on Math SE.

Upvotes: 0

Margaret Bloom

Reputation: 44146

Complexity is measured with respect to the length of the encoding of the problem instance.
The problem in both snippets is fully described by the parameter n, so an instance is a string <n> that encodes n.
We can use any non-degenerate encoding; a common choice in the literature is writing n in a positional numeral system.
For example, if n is 13 we can use "13" or "1101" or "15" or "D" as <n>.

It is easy then to see that the length of <n>, here denoted |n|, is log(n).
It is not exactly log(n): we would need to know the base to give an exact length. However, we don't need an exact length function L(n) that, given n, returns |n| exactly; an O(L(n)) estimate suffices, since we will use big-O notation anyway.
We can change to any base by multiplying by a constant, so using L(n) = log(n) will do.
However, if we don't specify the base, then when expressing n as a function of |n| we can only say that n = 2^O(|n|), which makes the formulas too crowded.
Since we lose no generality by fixing a base, we choose base two, so that |n| = log2(n) and n = 2^|n| (as n grows).
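
As a quick illustration (a minimal C# sketch of my own, not part of the snippets in question), the length of the base-2 encoding can be computed directly:

public static int EncodingLength(long n)
{
    // |n| = number of binary digits of n = floor(log2(n)) + 1
    return Convert.ToString(n, 2).Length;
}

// EncodingLength(13)        == 4   ("1101")
// EncodingLength(1_000_000) == 20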

You can refer to any introductory book on the theory of complexity if anything above is new to you.

Snippet 1

To keep the analysis very simple, we suppose that each iteration takes constant time.
Then the total time is the number of iterations, which is n^(1/2).
Since n^(1/2) = 2^(|n|/2), the time complexity is

2^O(|n|), Exponential
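
To see what that means concretely: for n = 2^40 (about 10^12), the encoding has |n| = 40 bits and the loop runs 2^20 ≈ 10^6 times; doubling the input length to |n| = 80 squares the iteration count to 2^40 ≈ 10^12. Growth like that, as a function of the input length, is exponential.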

Snippet 2

At iteration k, the original value of n has been reduced to n^(1/2^k). We then want to find k such that

n^(1/2^k) ≤ 1

so that the loop ends.
But that condition can never be satisfied for any finite value of k (prove this as an exercise).
However, the partial result is cast to an integer, so without resorting to the floor function we can simply require n^(1/2^k) ≤ 2: once n^(1/2^k) is two, at the next iteration it will be cast down to one.

n^(1/2^k) ≤ 2  =>  n ≤ 2^(2^k)  =>  log2(log2(n)) ≤ k

Since log2(n) = log2(2^|n|) = |n|, the time complexity is

O(log2(|n|)), Logarithmic


Stronger statements, in terms of big-theta, can be made, but that is left to the reader.

Upvotes: 0

alexeykuzmin0

Reputation: 6440

The first code snippet contains only one loop and a constant number of operations outside it. If the loop iterates k times and each iteration takes time t, its complexity is O(kt). Here k equals sqrt(n), so if the loop body contains no non-constant-time operations (no nested loops, recursive calls, etc.), the snippet's time complexity is O(sqrt(n)), also written O(√n).

The mere presence of a loop does not by itself determine the complexity. For example, the following code, despite having two nested loops, has linear complexity:

int j = 0;
for (int i = 0; i < n; ++i)
{
    // Note: j is shared across outer iterations and is never reset,
    // so the inner loop advances j at most n times in total.
    for (; j < n; ++j)
    {
        // A loop with constant-time operations and eventual breaks
    }
}

In this example, i goes from 0 to n, so we spend O(n) time increasing i. Similarly, j goes from 0 to n overall, so we do O(n) increments of the j variable and O(n) iterations of the inner loop body in total. Since there are no other operations in this code, the total complexity is O(n) + O(n) + O(n) = O(n).

To deal with the second example, I'll rewrite it in a recursive manner:

public static long Do(long n)
{
    // Do something with constant-time complexity
    return n > 1 ? Do((long)Math.Sqrt(n)) : result;   // result as in the question's snippet
}

Let us call the time complexity of this example T(n). We can see that T(n) = 1 + T(sqrt(n)), where the calculation time of the constant part of the function is taken as the time unit. Solving this recurrence gives T(n) = log log n (the logarithms here are binary). Indeed, 1 + log log(sqrt(n)) = 1 + log((log n) / 2) = 1 + log log n - 1 = log log n. For asymptotic expressions it does not really matter which logarithm base you use, since log_a x = log_a b * log_b x = O(log_b x); that is why the base is typically omitted.

So, the complexities are: O(√n) and O(log log n).

UPD: To non-strictly estimate your complexities, you can use Excel or any other calculation tool. You just need to build a table of operation counts for different values of n and try to guess the complexity rule. For example, for code snippet #2 from the question:


N      Operations  log n  log log n
1      1           0      -
2      2           1      0
4      3           2      1
16     4           4      2
256    5           8      3
65536  6           16     4

The correct answer is typically obvious from the table.
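
If you'd rather generate the table than fill it in by hand, here is a small C# sketch (my own helper, assuming .NET's Math.Log2 is available; it counts one operation per call of the recursive version above):

public static void PrintTable()
{
    Console.WriteLine("N\tOperations\tlog n\tlog log n");

    foreach (long n in new long[] { 1, 2, 4, 16, 256, 65536 })
    {
        long ops = 1;                      // the final call with n <= 1 still does constant work
        for (long m = n; m > 1; m = (long)Math.Sqrt(m))
            ops++;                         // one constant-time pass per loop iteration

        string loglog = n > 1 ? Math.Log2(Math.Log2(n)).ToString() : "-";
        Console.WriteLine($"{n}\t{ops}\t{Math.Log2(n)}\t{loglog}");
    }
}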

Upvotes: 1
