user5803705

Reputation:

Example of an algorithm that is too time complex?

As much as I've searched online, I cannot find examples of algorithms that aren't solvable in practice due to the amount of time they'd take to compute.

I was trying to think of an example such as counting the number, size, and location of each star that passes closest to a rocket ship on its trip to Alpha Centauri. Would this be a good example? I mean, the star system is nearly 26 trillion miles away.

EDIT: I'm doing a short presentation on Big-O and Little-O notation and wanted to think of something out of the ordinary as to WHY solutions to problems can be solvable in principle, yet not solvable in practice due to the extremely large amount of time they would take to compute, which is why Big-O is used to create estimates. The reason I wanted to go with stars is that it seems more interesting than some other subjects.

Thanks!

Upvotes: 1

Views: 588

Answers (7)

J. Song

Reputation: 105

You should definitely look into NP-hard problems, like the Travelling Salesman Problem. These are great teaching examples because they can be "solved" with different strategies and we're still working on them.

The algorithms that guarantee a correct answer are often too resource-intensive to implement in real life, as you describe. Thanks to Big O, we know that O((n^2)*(2^n)) is just not feasible to run on machines for decently sized inputs.
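To make that O((n^2)*(2^n)) concrete, here is a minimal sketch of Held-Karp, the classic exact TSP dynamic program with exactly that complexity (the distance matrix in the comments is a made-up example, not from the question):

```python
import itertools

def held_karp(dist):
    """Exact shortest tour via Held-Karp dynamic programming.

    Runtime is O(n^2 * 2^n): the table has up to 2^n subsets times n
    end cities, and each entry scans up to n predecessors.
    """
    n = len(dist)
    # dp[(mask, j)]: cheapest path starting at city 0, visiting exactly
    # the cities in bitmask `mask`, and ending at city j.
    dp = {(1 << j, j): dist[0][j] for j in range(1, n)}
    for size in range(2, n):
        for subset in itertools.combinations(range(1, n), size):
            mask = sum(1 << j for j in subset)
            for j in subset:
                prev = mask ^ (1 << j)  # same subset without city j
                dp[(mask, j)] = min(dp[(prev, k)] + dist[k][j]
                                    for k in subset if k != j)
    full = (1 << n) - 2  # every city except the start
    return min(dp[(full, j)] + dist[j][0] for j in range(1, n))
```

Even this clever DP only pushes feasibility from roughly a dozen cities (naive brute force over permutations) to around twenty: at n = 30 the 2^n term alone is over a billion subsets.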

We are forced to compromise with algorithms that perform better but may not give the best or correct result. These algorithmic compromises can be seen in a lot of relatable, real-life examples - one neat case is generating a Tour of 50 USA Landmarks.
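One such compromise, sketched here assuming a plain distance matrix, is the nearest-neighbour heuristic: O(n^2) instead of exponential, but with no guarantee the tour it returns is optimal:

```python
def nearest_neighbour_tour(dist, start=0):
    """Greedy TSP heuristic: always go to the closest unvisited city.

    Runs in O(n^2), but the resulting tour can be noticeably worse
    than the true optimum on unlucky inputs.
    """
    n = len(dist)
    tour, unvisited = [start], set(range(n)) - {start}
    while unvisited:
        last = tour[-1]
        nxt = min(unvisited, key=lambda j: dist[last][j])
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour
```

This is the trade-off in miniature: a quadratic-time answer you can actually compute versus an exponential-time answer you can only compute for tiny inputs.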

Upvotes: 1

YSharp

Reputation: 1086

With state-of-the-art algorithms as of 2009, it took two years to factor RSA-768, which has "only" 232 decimal digits -- and that was done by using a bunch of computers in parallel.

One can speculate how long it would take to factor RSA-2048, which has 617 decimal digits.
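For a sense of scale, here is a toy factoring sketch. It is vastly slower than the number-field sieve actually used on RSA-768, but it makes the growth visible: trial division needs on the order of 10^(d/2) steps for a d-digit semiprime with balanced factors, so even at a (made-up) 10^9 divisions per second, 232 digits means roughly 10^107 seconds.

```python
def trial_division(n):
    """Naive factoring: try every divisor up to sqrt(n).

    For a d-digit number this is ~10^(d/2) iterations in the worst
    case -- hopeless long before RSA-sized inputs.
    """
    f = 2
    while f * f <= n:
        if n % f == 0:
            return f, n // f
        f += 1
    return n, 1  # n is prime
```

The point is not that trial division is the best algorithm (it isn't); it's that even the best known algorithms still grow far too fast in the number of digits.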

Upvotes: 0

Funny Geeks

Reputation: 495

A German hacker is trying to log on to the NSA's servers to find out the nuclear missile launch codes. He knows all NSA passwords must be at least 16 characters, the admin password is randomly generated every day at midnight, and it can include any ASCII character. He has 26 days before WWIII starts.

Should he try to save the world, or go on vacation? Only big O notation will tell.
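Joking aside, the arithmetic behind the decision is short. Assuming the 95 printable ASCII characters and a very generous, made-up rate of 10^10 guesses per second:

```python
# Hypothetical numbers: 95 printable ASCII characters, a 16-character
# password, and 10^10 guesses per second.
ALPHABET = 95
LENGTH = 16
GUESSES_PER_SECOND = 10**10

search_space = ALPHABET ** LENGTH                  # about 4.4e31 passwords
seconds = search_space / GUESSES_PER_SECOND
years = seconds / (365 * 24 * 3600)
print(f"{search_space:.2e} passwords, about {years:.0e} years to try them all")
```

Around 10^14 years to exhaust the space, against a 26-day deadline: vacation it is.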

I hope this example was "interesting".

Upvotes: 0

dwks

Reputation: 602

The main purpose of Big-Oh notation isn't to estimate a program's runtime on some particular input. It's to analyze how quickly the program slows down as you increase its input size. Does an input twice the size take twice as long to run, four times as long, or sixteen times?

The most illustrative example for Big-Oh is where the problem has a very small input but takes a long time to run. And when you increase the input size just a little bit, it takes far longer yet. Some of the fastest growing algorithms take exponential or factorial time, and they usually involve looking at all permutations of elements (all different orders, or ways the elements can be arranged).

Hence, a good example: someone set a 10-digit passcode on a safe (digits 0-9) and you have to guess it. There are 10^10 combinations, and you'll make an average of 5*10^9 guesses before getting it right. But if it's an 11-digit passcode, guessing will take you ten times longer.
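That scaling is easy to check. This small helper assumes the passcode is uniformly random, so on average you search about half the space before hitting it:

```python
def average_guesses(digits, alphabet=10):
    """Expected brute-force attempts for a uniformly random passcode:
    roughly half of the alphabet**digits search space."""
    return (alphabet ** digits) // 2
```

`average_guesses(10)` is 5*10^9, and each extra digit multiplies the work by the alphabet size, ten here.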

Upvotes: 0

pkacprzak

Reputation: 5617

What you should really look at is not the complexity of the algorithm, but the complexity of the underlying problem: one can use a very inefficient algorithm for sorting, for example iterating over all O(n!) possible orders, but that doesn't mean sorting takes much time in practice.

Let's consider one of the most fundamental string problems: the Longest Common Subsequence (LCS) of two strings, each of length n.

It is known that the problem can be solved in O(n^2) time using dynamic programming. On the other hand, it has also been proven (paper) that for the general instance of the problem, any algorithm (in a comparison-based model) requires Ω(n^2) operations; faster algorithms are possible in some special cases.

Thus, if you want to solve a general instance of this problem for strings of a million characters (nothing very big for modern computing), any algorithm will take time proportional to 10^12 operations. In fact, in computational biology, the problem of finding the longest common subsequence of DNA sequences is very important, and those sequences are extremely long, so it is a good example of the kind of real-world problem you asked for.
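The O(n^2) dynamic program mentioned above is short enough to sketch, and its Θ(n^2) table is exactly why million-character inputs mean roughly 10^12 operations:

```python
def lcs_length(a, b):
    """Length of the longest common subsequence of strings a and b.

    Classic dynamic program: dp[i][j] is the LCS length of the first
    i characters of a and the first j characters of b, so the table
    has (len(a)+1) * (len(b)+1) cells -- O(n^2) time and space.
    """
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[m][n]
```

For two million-character strings, the table alone has about 10^12 cells, which is the lower-bound cost made tangible.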

Upvotes: 0

Thomas Kilkelly

Reputation: 125

Your example isn't great, because it looks like it is of linear complexity (O(n) time): if you doubled the length of the trip, the time it takes your program to execute would only double. You're probably looking for a problem with an exponential-time solution, like the travelling salesman problem, which quickly becomes practically unsolvable as you increase the number of cities. Look up time complexity for more information.

Another interesting thing you could try looking at is the halting problem.

Upvotes: 0

Lorenzo

Reputation: 1885

Imagine you have an array denoting a deck of 52 cards [AS, 2S, 3S, ... , JH, QH, KH] and you want to return all the different ways these cards can be shuffled.

The first card has 52 possibilities, the second card has 51, the third has 50, and so on. There are a total of 52 factorial (52 * 51 * 50 * ... * 3 * 2 * 1) orderings, which is over 10^67 (far more than the number of atoms that make up the Earth).
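You don't have to take the 10^67 on faith; Python's arbitrary-precision integers compute it exactly:

```python
import math

# Number of distinct orderings of a 52-card deck.
orderings = math.factorial(52)
print(len(str(orderings)))  # 52! has 68 digits, i.e. roughly 8e67
```

Enumerating them all at even a billion orderings per second would still take around 10^59 seconds, so any algorithm that touches every permutation is dead on arrival.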

A lot of complex problems arise from having to reuse the same data in many different ways, not just having a lot of data.

Upvotes: 2
