User

Reputation: 1726

What is the big-O complexity of this algorithm?

I have written the function below, which is essentially a merge sort.

public static double[] nlgn(double[] nums) {

        if (nums.length > 1) {
            int elementsInA1 = nums.length / 2;
            int elementsInA2 = nums.length - elementsInA1;
            double[] arr1 = new double[elementsInA1];
            double[] arr2 = new double[elementsInA2];

            // Copy each half of nums into its own array.
            for (int i = 0; i < elementsInA1; i++)
                arr1[i] = nums[i];

            for (int i = elementsInA1; i < elementsInA1 + elementsInA2; i++)
                arr2[i - elementsInA1] = nums[i];

            // Recursively sort both halves.
            nlgn(arr1);
            nlgn(arr2);

            int i = 0, j = 0, k = 0;

            // Merge the two sorted halves back into nums.
            while (arr1.length != j && arr2.length != k) {
                if (arr1[j] <= arr2[k]) {
                    nums[i] = arr1[j];
                    i++;
                    j++;
                } else {
                    nums[i] = arr2[k];
                    i++;
                    k++;
                }
            }

            // Copy over whatever remains in either half.
            while (arr1.length != j) {
                nums[i] = arr1[j];
                i++;
                j++;
            }
            while (arr2.length != k) {
                nums[i] = arr2[k];
                i++;
                k++;
            }
        }

        return nums;
    }

Since this is a merge sort, I know from my research that the big-O complexity of this algorithm is O(n lg n). However, when I run my timing tests, the results do not suggest that it is running in O(n lg n) time. It seems like it should be, though: up until the end of the two for loops at the beginning, it runs in O(n) time, and once past that it should run in O(lg n) time as it sorts each element.

My question is: can somebody confirm that this piece of code runs in O(n lg n) time? If not, I would like to know where I am going wrong in my understanding.

Upvotes: 3

Views: 143

Answers (2)

code_dredd

Reputation: 6085

Since this is a merge sort, I know from my research that the big-O complexity of this algorithm is O(n lg n). [...] My question is: can somebody confirm that this piece of code runs in O(n lg n) time?

No need to show it, because merge sort has already been proven to run in O(n lg n) time. But if you'd like to observe it, you'll need to experiment with increasingly large values for your inputs. You might want to update your post with your input values and timing results.
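For example, here is a minimal timing harness along those lines, assuming the nlgn method above is accessible from the same class; the sizes and the fixed seed are arbitrary choices for illustration:

import java.util.Random;

public class TimingDemo {
    public static void main(String[] args) {
        Random rng = new Random(42);
        // Double n each round. If the running time is O(n lg n), the
        // ratio between consecutive rounds should approach
        // 2 * lg(2n) / lg(n), i.e. a little above 2.
        for (int n = 1 << 13; n <= 1 << 22; n <<= 1) {
            double[] nums = new double[n];
            for (int i = 0; i < n; i++)
                nums[i] = rng.nextDouble();

            long start = System.nanoTime();
            nlgn(nums);
            long elapsedNanos = System.nanoTime() - start;
            System.out.printf("n = %,10d  time = %10.3f ms%n", n, elapsedNanos / 1e6);
        }
    }
}

Keep in mind that JVM warm-up (JIT compilation) will distort the first few measurements, so discard them or repeat each size several times.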

However, when I run my timing tests, the results do not suggest that it is running in O(n lg n) time. [...] If not, I would like to know where I am going wrong in my understanding.

I think you may be misunderstanding what Big-O notation actually tells you. Big-O gives you an asymptotic upper bound on the algorithm's running time once the input becomes large enough. (How "large" is "large enough" varies from algorithm to algorithm and has to be found by experimentation. The point is that such a threshold exists, and the notation abstracts it away.)

In other words, Big-O tells you what the worst-case performance of the algorithm could be as N becomes very large. Since this is the worst-case scenario, it also means the algorithm could perform better under some circumstances, but we generally don't care about those. (Look into Big-Omega and Big-Theta if you're interested.) For example, if you have a "small-enough" list, a simple O(n^2) algorithm like insertion-sort can run faster than merge-sort, and hybrid implementations often switch to it as an optimization (see the sketch below).
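To make that concrete, here is a hedged sketch of such a hybrid; the cutoff of 16 is picked purely for illustration (real implementations tune this value empirically):

import java.util.Arrays;

public class HybridSortDemo {
    static final int CUTOFF = 16; // illustrative threshold, not tuned

    public static void hybridSort(double[] nums) {
        // For small inputs, insertion sort's low constant factors win
        // despite its O(n^2) worst case.
        if (nums.length <= CUTOFF) {
            insertionSort(nums);
            return;
        }
        // Otherwise split, recurse, and merge as in an ordinary merge sort.
        int mid = nums.length / 2;
        double[] left = Arrays.copyOfRange(nums, 0, mid);
        double[] right = Arrays.copyOfRange(nums, mid, nums.length);
        hybridSort(left);
        hybridSort(right);

        int i = 0, j = 0, k = 0;
        while (j < left.length && k < right.length)
            nums[i++] = (left[j] <= right[k]) ? left[j++] : right[k++];
        while (j < left.length)
            nums[i++] = left[j++];
        while (k < right.length)
            nums[i++] = right[k++];
    }

    private static void insertionSort(double[] nums) {
        for (int a = 1; a < nums.length; a++) {
            double key = nums[a];
            int b = a - 1;
            while (b >= 0 && nums[b] > key) {
                nums[b + 1] = nums[b];
                b--;
            }
            nums[b + 1] = key;
        }
    }
}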

It's also an approximation because the constants and other polynomial terms are not shown as part of the notation. For example, a hypothetical algorithm with a time complexity of 500n^2 + 15n + 9000 is simply written as O(n^2); the snippet after the list below shows why that simplification is reasonable.

Some reasons for dropping the lower terms include:

  • Relative size: as n tends towards positive infinity, the n^2 term dominates; the lower terms contribute less and less to the overall cost compared to the dominating term, like adding a few drops of water to a lake;
  • Convenience: reading and understanding O(n^2) is easier than reading a longer, more complicated polynomial that adds no real information.
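As promised above, here is a small sketch that evaluates the hypothetical cost function 500n^2 + 15n + 9000 and shows how quickly the lower-order terms stop mattering:

public class DominantTermDemo {
    public static void main(String[] args) {
        // Share of the total cost contributed by the lower-order
        // terms (15n + 9000); it shrinks towards zero as n grows,
        // which is why O(n^2) keeps only the dominating term.
        for (long n = 10; n <= 10_000_000L; n *= 10) {
            double total = 500.0 * n * n + 15.0 * n + 9000.0;
            double lower = 15.0 * n + 9000.0;
            System.out.printf("n = %,12d  lower-term share = %.6f%%%n",
                    n, 100.0 * lower / total);
        }
    }
}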

Upvotes: 1

Euclid Ye

Reputation: 521

O(n log n) is an asymptotically tight bound for merge sort. That means its running time only gets close to that bound when n is large enough. When n is small, function-call overhead and many other factors dominate, so the bound is not tight.

You can make n larger and compare the ratios between inputs to see whether the growth is close to O(n log n). Although I do wonder how large you will have to make n...
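One way to do that comparison, as a sketch (the sizes are arbitrary): print the ratio you would expect from f(n) = n * log2(n) each time n doubles, and check your measured time ratios against it:

public class GrowthRatioDemo {
    public static void main(String[] args) {
        // For f(n) = n * log2(n), doubling n multiplies f by
        // 2 * log2(2n) / log2(n). Measured times for a genuine
        // O(n log n) sort should show ratios close to these.
        for (int n = 1 << 10; n <= 1 << 24; n <<= 1) {
            double fN  = n * (Math.log(n) / Math.log(2));
            double f2N = 2.0 * n * (Math.log(2.0 * n) / Math.log(2));
            System.out.printf("n = %,10d  expected T(2n)/T(n) = %.4f%n",
                    n, f2N / fN);
        }
    }
}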

Upvotes: 3
