Reputation: 291
As a class assignment, I have to write a C program that generates all Pythagorean triples less than a given value 't'. Here's my code, which first generates a primitive triple (a, b, c) using Euclid's Formula, and then prints all triples of the form (ka, kb, kc) with kc < t.
for (i = 2; i < (sqrt(t) + 1); i++)
    for (j = 1; j < i; j++)
        /* (i, j) must be coprime, of opposite parity, and small enough
           that the hypotenuse i*i + j*j stays below t */
        if ((gcd(i, j) == 1) && ((i - j) % 2) && ((i*i + j*j) < t))
        {
            k = 0;
            a = i * i - j * j;   /* Euclid's Formula: primitive triple (a, b, c) */
            b = 2 * i * j;
            c = i * i + j * j;
            /* print every multiple (ka, kb, kc) while kc < t */
            while ((++k) * c < t)
                printf("(%d, %d, %d)\n", k * a, k * b, k * c);
        }
Most other algorithms I came across use nested loops to check sums of squares, and they are significantly slower than this as t grows. Is it possible to prove that my approach is indeed faster?
Upvotes: 5
Views: 3515
Reputation: 17876
As an alternative to Euclid's formula, if (a, b, c) is a primitive Pythagorean triple, so are (a-2b+2c, 2a-b+2c, 2a-2b+3c), (a+2b+2c, 2a+b+2c, 2a+2b+3c) and (-a+2b+2c, -2a+b+2c, -2a+2b+3c). Here's the algorithm in Python (because I just happened to have the algorithm in Python, and I'm too lazy to rewrite it in C, and anyway, it's your homework):
def pyth(n):
    # Walk the ternary tree of primitive triples rooted at (3, 4, 5),
    # keeping every triple whose perimeter a + b + c does not exceed n.
    def p(a, b, c):
        if n < a + b + c: return []
        return ([[a, b, c]] if a < b else [[b, a, c]]) \
            + p(a-b-b+c+c, a+a-b+c+c, a+a-b-b+c+c+c) \
            + p(a+b+b+c+c, a+a+b+c+c, a+a+b+b+c+c+c) \
            + p(c+c+b+b-a, c+c+b-a-a, c+c+c+b+b-a-a)
    return p(3, 4, 5)
Then it is easy to multiply each primitive triple by successive constants until you reach the limit. I'm not sure if this is faster than the Euclid's-formula approach, but I'm hopeful that it is because it has no gcd calculations.
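For instance, a minimal sketch of that scaling step, assuming the scaled triples should respect the same perimeter bound n:

def all_pyth(n):
    # Scale every primitive triple by k = 1, 2, 3, ... while the
    # scaled perimeter k*(a + b + c) still fits within the bound n.
    result = []
    for a, b, c in pyth(n):
        k = 1
        while k * (a + b + c) <= n:
            result.append((k * a, k * b, k * c))
            k += 1
    return result

print(all_pyth(60))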
Upvotes: 1
Reputation: 4411
Algorithm complexity is the general method for analyzing algorithmic performance. In particular, big O is commonly used to compare algorithms based on the worst-case behaviour of each of them.
In your case you have 4 loops:
- the for loop that iterates through i
- the for loop that iterates through j
- the gcd computation
- the while loop
In the worst case each of these loops performs sqrt(t) iterations. A big-O complexity would be:
O(for_i) * O(for_j) * (O(gcd) + O(while))
  = O(sqrt(t)) * O(sqrt(t)) * (O(sqrt(t)) + O(sqrt(t)))
  = O(t * sqrt(t))
For the other algorithms that are slower than your method, you can apply the same reasoning to find their big O and then show that it is greater than yours. For example, the naive algorithm that checks all sums of squares has 2 nested loops, each with at most t iterations, and therefore its big O is O(t*t) > O(t*sqrt(t)).
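For concreteness, a minimal sketch of such a naive sum-of-squares check (shown in Python for brevity, not the asker's code) might look like:

import math

def naive_triples(t):
    # Two nested loops over a and b, each with up to t iterations,
    # testing whether a*a + b*b is a perfect square with root below t:
    # roughly O(t*t) work overall.
    result = []
    for a in range(1, t):
        for b in range(a + 1, t):
            c2 = a * a + b * b
            c = math.isqrt(c2)
            if c < t and c * c == c2:
                result.append((a, b, c))
    return result

print(naive_triples(30))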
Upvotes: 3