James P.

Reputation: 19607

Algorithms: explanation about complexity and optimization

I'm trying to find a source or two on the web that explain these in simple terms. Also, can this notion be used in a practical fashion to improve an algorithm? If so, how? Below is a brief explanation I got from a contact.

I don't know where you can find a simple explanation, but I'll try to explain. Algorithmic complexity is a term that describes the relationship between the size of the input and the resources needed to process it. For example, to find the maximum in an array you enumerate all the elements and compare each one with a variable (e.g. max). If the array has N elements, the complexity of this algorithm is O(N), because we visit each element once. If, for every element, we enumerate all the elements again (a nested loop), the complexity becomes O(N*N). Binary search has complexity O(log2(N)), because it halves the search area at every step. You can also consider the space complexity of your algorithm: the relationship between the memory the program requires and the amount of input data.
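To make the quoted explanation concrete, here is a rough Python sketch of the three cases it mentions (an illustrative addition, not part of the contact's message):

```python
# Illustrative sketches of the complexities described above.

def find_max(items):
    """O(N): one pass over the array, one comparison per element."""
    largest = items[0]
    for value in items[1:]:
        if value > largest:
            largest = value
    return largest

def has_duplicate(items):
    """O(N*N): for every element we scan the whole array again (nested loops)."""
    for i in range(len(items)):
        for j in range(len(items)):
            if i != j and items[i] == items[j]:
                return True
    return False

def binary_search(sorted_items, target):
    """O(log2(N)): each step halves the remaining search area."""
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_items[mid] == target:
            return mid
        elif sorted_items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1
```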

Upvotes: 0

Views: 745

Answers (6)

Mike Dunlavey

Reputation: 40659

By all means, understand data structures, algorithms, and big-O.
Design code carefully and well, keeping it as simple as possible.

But that's not enough.

The key to optimizing is knowing how to find problems, after the code is written.

Here's an example.

Upvotes: 1

Saeed Amiri

Reputation: 22555

It's not easy to cover everything about complexity, but Wikipedia has good explanations and is a fine starting point; see:

  1. Big O notation, which introduces this topic (you can also look at the theta and omega notations).
  2. Analysis of algorithms, to learn more about complexity.
  3. Computational complexity theory, which is a large field of computer science.

As for optimization, you can search the web and Wikipedia, and the few lines your contact gave you are a good introduction, but understanding how these notions are used and calculated, and the theory behind them, is not a one-night effort.

In short, you can become familiar with these topics as needed: read Wikipedia, then move on to more advanced books such as Garey and Johnson or Computational Complexity: A Modern Approach, but don't expect to know everything about them afterwards. You can also look at these lecture notes: http://www.cs.rutgers.edu/~allender/lecture.notes/.

Upvotes: 2

slebetman

Reputation: 113876

Also, can this notion be used in a practical fashion to improve an algorithm? If so, how?

It's not so much used for improving an algorithm as for evaluating the performance of algorithms and deciding which algorithm to use. For any given problem, you really want to avoid algorithms that are O(N!) or O(N^x), since they slow down dramatically as N (the size of your input) increases. What you want is O(N), O(log(N)), or even better O(1).

O(1) is constant time, which means the algorithm takes the same amount of time to execute for a million inputs as it does for one. O(N) is of course linear, which means the time it takes to execute the algorithm grows in proportion to the size of its input.
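For illustration, here is a small Python sketch of the same membership test performed at three different complexities (the specific containers are an assumption for the example, not something prescribed by the answer):

```python
import bisect

data = list(range(1_000_000))   # sorted list
data_set = set(data)            # hash-based set
target = 999_999

# O(N): linear scan through the list.
found_linear = target in data

# O(log(N)): binary search on the sorted list.
idx = bisect.bisect_left(data, target)
found_binary = idx < len(data) and data[idx] == target

# O(1) on average: hash lookup in the set.
found_hash = target in data_set
```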

There are even problems for which every exact algorithm developed so far ends up being exponential or worse (e.g. O(N!)). For this class of problems, known as NP-complete, no fast (polynomial-time) algorithm is known. Once you realize you're dealing with such a problem, you can relax your requirements a bit and solve it imperfectly by "cheating". These cheats don't necessarily find the optimal solution but instead settle for one that is good enough. My favorite cheats are genetic/evolutionary algorithms and rainbow tables.
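As one concrete example of such a cheat (a different heuristic from the ones named above, chosen only for brevity), here is a sketch of a greedy nearest-neighbour tour for the travelling salesman problem: O(N^2) work instead of the O(N!) exhaustive search, at the price of a possibly suboptimal result:

```python
import math

def tour_length(points, order):
    """Total length of a closed tour visiting the points in the given order."""
    return sum(
        math.dist(points[order[i]], points[order[(i + 1) % len(order)]])
        for i in range(len(order))
    )

def nearest_neighbour_tour(points):
    """Greedy 'good enough' cheat: O(N^2) instead of trying all N! orderings."""
    unvisited = set(range(1, len(points)))
    tour = [0]
    while unvisited:
        last = tour[-1]
        nearest = min(unvisited, key=lambda i: math.dist(points[last], points[i]))
        tour.append(nearest)
        unvisited.remove(nearest)
    return tour
```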

Another example of how understanding algorithmic complexity changes how you think about programming is micro-optimization, or rather, not doing it. You often see newbies asking questions like "is ++x faster than x++?". Seasoned programmers mostly don't care and will usually reply that the first rule of optimization is: don't.

The more useful answer is that changing x++ to ++x does not in any way alter your algorithm's complexity. The complexity of your algorithm has a much greater impact on the speed of your code than any form of micro-optimization. For example, it is much more productive to look at your code and reduce the number of deeply nested for loops than to worry about how your compiler turns your code into assembly.
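For example, the sketch below (a hypothetical duplicate-counting task) shows the kind of change that actually matters: replacing the inner loop with a hash lookup drops the complexity from O(N*N) to O(N), which dwarfs any ++x versus x++ tweak:

```python
def count_duplicate_pairs_quadratic(items):
    """O(N*N): the nested loop dominates, however the increments are written."""
    count = 0
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if items[i] == items[j]:
                count += 1
    return count

def count_duplicate_pairs_linear(items):
    """O(N): a hash map of counts replaces the inner loop entirely."""
    seen = {}
    count = 0
    for value in items:
        count += seen.get(value, 0)          # pairs formed with earlier copies
        seen[value] = seen.get(value, 0) + 1
    return count
```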

Yet another example is how, in games programming, speeding up code counter-intuitively involves adding even more code instead of removing it. The added code takes the form of filters (basically if..else statements) that decide which bits of data need further processing and which can be discarded. From a micro-optimizer's point of view, adding code means more instructions for the CPU to execute. But in reality those filters reduce the problem space by discarding data, and therefore the program runs faster overall.
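A rough sketch of that idea, using a cheap bounding-box filter in front of a more expensive collision test (the function names and data layout here are hypothetical, just to show the pattern):

```python
def coarse_overlap(a, b):
    """Cheap filter: axis-aligned bounding boxes, a handful of comparisons."""
    return (a["min_x"] <= b["max_x"] and b["min_x"] <= a["max_x"] and
            a["min_y"] <= b["max_y"] and b["min_y"] <= a["max_y"])

def detect_collisions(objects, precise_test):
    """More code than a bare nested loop, but the filter discards most pairs
    before the expensive precise_test ever runs."""
    hits = []
    for i in range(len(objects)):
        for j in range(i + 1, len(objects)):
            a, b = objects[i], objects[j]
            if coarse_overlap(a, b) and precise_test(a, b):
                hits.append((i, j))
    return hits
```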

Upvotes: 1

ruslik

Reputation: 14870

You can watch Structure and Interpretation of Computer Programs. It's a nice MIT course.

Upvotes: 1

templatetypedef

Reputation: 372784

The course reader used in Stanford's introductory programming classes has a great chapter on algorithmic analysis by legendary CS educator Eric Roberts. The whole text is online at this link, and Chapter 8 might be worth looking at.

Upvotes: 1

grossvogel

Reputation: 6782

As your friend hinted, this isn't a simple topic. But it is worth investing some time to learn. Check out this book, commonly used as a textbook in CS courses on algorithms.

Upvotes: 1
