Rockapella

Reputation: 53

Time complexity (Pythagorean Triples)

I have written two different algorithms to calculate Pythagorean Triples:

import java.util.*;

class Untitled {
  public static void main(String[] args) {

    int n = 20;

    System.out.println("--------------------");
    algorithmOne(n);
    System.out.println("--------------------\n");
    algorithmTwo(n);
    System.out.println("--------------------");
  }

  public static void algorithmOne(int n){

    long startTime = System.nanoTime();

    for (int i = 1 ; i <= n ; i++) {
      for (int j = 1 ; j <= n ; j++) {
        for (int k = 1; k <= n ; k++) { 
          if (Math.pow(i,2) + Math.pow(j,2) == Math.pow(k,2)) {
            System.out.println(i + "," + j + "," + k);
          }
        }
      }
    }

    System.out.println("Run Time: " + (System.nanoTime() - startTime)/1000000 + " milliseconds");
  }

  public static void algorithmTwo(int n){

    long startTime = System.nanoTime();

    ArrayList<Integer> squares = new ArrayList<>();

    // O(n)
    for(int i = 1 ; ; i++) {
      if ((int)Math.sqrt(i) > n) {
        break;
      }
      if (Math.sqrt(i) == (int)Math.sqrt(i)) {
        squares.add(i);
      }
    }

    // O(n^3)
    for (int i = 0 ; i < squares.size() ; i++) {
      for (int j = 0 ; j < squares.size() ; j++) {
        for (int k = 0 ; k < squares.size() ; k++) {
          if (squares.get(i) + squares.get(j) == squares.get(k)) {
            System.out.println((int)Math.sqrt(squares.get(i)) + "," + (int)Math.sqrt(squares.get(j)) + "," + (int)Math.sqrt(squares.get(k)));
          }
        }
      }
    }

    System.out.println("Run Time: " + (System.nanoTime() - startTime)/1000000 + " milliseconds");

  }
}

I believe both algorithms are O(n^3); however, when I measure how long they take to run, the second algorithm is a lot faster. Using n = 20, algorithm one takes about 60 milliseconds and algorithm two takes about 5 milliseconds. How can these two algorithms have the same time complexity, yet one runs faster than the other? I understand that the second algorithm doesn't have to iterate over as many numbers in the triple for loop, but shouldn't that mean its time complexity would be lower?

Upvotes: 0

Views: 359

Answers (2)

Yakov Galka

Reputation: 72479

The big-O notation "hides the constant". Two algorithms, one that runs in 5n^3 milliseconds and another that runs in 5000000n^3 milliseconds, are both O(n^3), but the second is a million times slower. This is why big-O notation does not tell the entire story. For example, there are many different sorting algorithms that are all O(N log N), yet some of them are faster than others in general, or faster on specific inputs. There is more to performance than a basic introductory book on algorithms will tell you.

As far as your code goes, accessing a value in an array is apparently faster than calling Math.pow, which is why the version that precomputes the squares is faster overall. However, I suspect that Math.pow in Java is costlier than a simple integer multiplication, so I would try replacing it with i*i + j*j == k*k and see whether that makes a significant difference. Even if it is still slower than the second algorithm, it is worth knowing that re-computing a value can sometimes be faster than fetching it from memory.
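A minimal sketch of that change (the class and method names here are my own, not the asker's; it counts matches rather than printing them so the result is easy to check):

```java
// Variant of the asker's algorithmOne that avoids Math.pow entirely,
// using integer multiplication for the Pythagorean check.
class TriplesMul {
  // Counts ordered triples (i, j, k) with 1 <= i, j, k <= n
  // satisfying i*i + j*j == k*k.
  static int countTriples(int n) {
    int count = 0;
    for (int i = 1; i <= n; i++) {
      for (int j = 1; j <= n; j++) {
        for (int k = 1; k <= n; k++) {
          if (i * i + j * j == k * k) { // pure int arithmetic, no doubles
            count++;
          }
        }
      }
    }
    return count;
  }
}
```

The loop structure and the O(n^3) bound are unchanged; only the constant factor per iteration shrinks, since no floating-point calls are made.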


Another, unrelated, thing I noticed in your code is that you assume the complexity of your square-calculating loop is O(n):

// O(n)
for(int i = 1 ; ; i++) {
  if ((int)Math.sqrt(i) > n) break;
  ...
}

This isn't true, though. You loop while sqrt(i) <= n, that is, while i <= n*n. The loop body therefore executes n^2 times, giving that loop a complexity of O(n^2). The following loop does indeed have O(n) complexity:

// O(n)
for(int i = 1 ; i<=n; i++) {
   squares.add(i*i);
}

It will run faster, but won't make much of a difference due to the overall O(n^3) complexity of the entire algorithm.

Upvotes: 1

rng70

Reputation: 25

You have to understand what complexity means first.

Let's take an example. Suppose you have a bag that can hold 5 kg of apples, regardless of how many individual apples that is: it might be 20 small apples or 5 large ones. Talking about time complexity is like talking about the weight limit (the 5 kg), not the exact number of apples (that would be more like memory complexity).

So when you talk about big-O, you have to know what it means: O(n^3) means the algorithm takes at most on the order of n^3 steps. It may perform every one of those steps, or it may skip some, but either way the bound is n^3. I think it is now clear why I keep saying "at most".

Your second algorithm skips some steps while your first does not, so the second runs a bit faster; but in the big-O sense both still take at most O(n^3), and the size of the lists still matters.

Additional info:

What you are computing is called a Pythagorean triple, and an O(|result|) solution exists; you can look it up on Wikipedia.
Also learn about the Stern-Brocot tree, which is the essential ingredient for writing the most optimal version.
For the theory, search for "rational points on the circle".
It is not an easy topic, but it helps if you know a little geometry.
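One way to see what "O(|result|)" means in practice (this is a sketch using Euclid's classic formula, not the Stern-Brocot method itself, and the class and method names are my own): every primitive triple has the form a = m^2 - n^2, b = 2mn, c = m^2 + n^2 for some m > n > 0 with gcd(m, n) = 1 and m - n odd, so you can enumerate triples directly instead of testing every (i, j, k) combination.

```java
import java.util.*;

class EuclidTriples {
  // Generates all primitive Pythagorean triples (a, b, c) with c <= limit,
  // via Euclid's formula: a = m^2 - n^2, b = 2mn, c = m^2 + n^2,
  // where m > n > 0, gcd(m, n) == 1, and m - n is odd.
  static List<int[]> primitiveTriples(int limit) {
    List<int[]> out = new ArrayList<>();
    for (int m = 2; m * m <= limit; m++) {
      for (int n = 1; n < m; n++) {
        if ((m - n) % 2 == 1 && gcd(m, n) == 1) {
          int c = m * m + n * n;
          if (c <= limit) {
            out.add(new int[]{m * m - n * n, 2 * m * n, c});
          }
        }
      }
    }
    return out;
  }

  static int gcd(int a, int b) { return b == 0 ? a : gcd(b, a % b); }
}
```

The work done here is proportional to the number of (m, n) pairs examined, not to n^3, which is why this direction scales far better than the brute-force search in the question.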

Upvotes: 0
