Reputation: 8801
Sometimes I get totally fooled trying to estimate an algorithm's speed with big-O notation. I can usually point out when the order is O(n) or O(m*n), but for the ones that are O(lg(n)) or O(C^n) I think I am missing something... So, what are your tips and tricks for making a quick estimate from a fast look over an algorithm?
As an example of what I am looking for, here are my attempts at the easy ones (they may be wrong, but I am trying my best):
Thanks in advance.
Upvotes: 6
Views: 2083
Reputation: 8801
I am not a big fan of answering my own questions, but I found this today and it reminded me of this SO question.
Upvotes: 0
Reputation: 10184
Usually something like O(log N) comes about because the data has size N but is organized, e.g., in a tree whose depth is log N. If the typical search involves going from the root to a leaf (in the worst case), then it is easy to see that the algorithm will be O(log N).
There are no hard and fast rules - you just have to look at each case, figure out the worst-case scenario, and calculate what its cost would be.
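As a rough sketch of that root-to-leaf walk (a hypothetical node type, and assuming the tree is balanced so its depth is about log N):

struct Node { int key; Node *left, *right; };

// With a balanced tree of N nodes the loop below runs at most about log N times,
// and each iteration does a constant amount of work.
bool contains(Node *root, int key) {
    for (Node *cur = root; cur != nullptr; ) {
        if (key == cur->key) return true;
        cur = (key < cur->key) ? cur->left : cur->right;   // descend one level
    }
    return false;
}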
Upvotes: 1
Reputation: 25512
The asymptotic complexity of algorithms is important in practice, and here are some rules of thumb I use when I review my own or other people's code. Usually practical computer algorithms are functions of many variables and nontrivial data structures, but let us assume (just for illustration) that our algorithm f takes basically a single integer X as its argument, and that we want to find the asymptotic complexity of f in terms of X. Suppose f(0) is trivial. Then in general:
Some special cases:
In many languages, algorithms that grow strings or memory areas by appending to them incur O(X**2) overhead, such as
for (i = 0; i < X; i++) { s += "foo"; } // quadratic
This typical nested loop also has O(X**2) overhead:
for (i = 0; i < X; i++) { for (j = 0; j < i; j++) { ... } } // quadratic
C++ STL containers like std::set and std::map have O(log X) overhead for almost all operations.
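For instance (a minimal sketch, not from the original answer), filling a std::set with X elements does X inserts of O(log X) each, so the whole loop is O(X log X):

#include <set>

void fill_set(int X) {   // hypothetical helper, just to illustrate the cost
    std::set<int> s;
    for (int i = 0; i < X; i++) { s.insert(i); }   // each insert is O(log X) => O(X log X) total
}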
Just my 2 cents! Hope this helps!
Upvotes: 1
Reputation: 405675
O(lg(n)): If your problem gets smaller by some proportion of n (often n/2) at each step of your algorithm and each step does a constant amount of work. Binary search is a good example since each step cuts your problem size in half by doing a constant amount of work (calculate the mid-point and do one comparison).
Note that the n is on top of that proportion: the problem shrinks to something like n/2, which is not the same as your problem size being reduced by 1/n at each step. :)
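A minimal sketch of that idea (assuming a sorted array a of length n; not anyone's particular implementation):

// Each iteration halves the range [lo, hi) with a constant amount of work,
// so the loop runs O(lg n) times.
int binary_search(const int a[], int n, int key) {
    int lo = 0, hi = n;
    while (lo < hi) {
        int mid = lo + (hi - lo) / 2;   // calculate the mid-point
        if (a[mid] == key) return mid;
        if (a[mid] < key) lo = mid + 1; else hi = mid;
    }
    return -1;   // not found
}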
Upvotes: 4
Reputation: 15517
The Wikipedia article on Big O Notation has a nice chart of the orders of common functions.
Upvotes: 1
Reputation: 426
If you're looking for a quick way to estimate the runtime of an algorithm, the other answers are good. If you want a more elaborate answer, I suggest you look at the "master theorem"; the German Wikipedia article has a nice table for it.
Edit: John D. Cook has written a nice recap of the master theorem; see the link in his answer.
Upvotes: 1
Reputation: 30089
Here's a blog post that might help:
The cost of breaking things down and putting them back together
That post explains the "master theorem" for working with big-O orders.
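For a flavor of what the theorem handles (my own illustration, not taken from that post):

T(N) = 2*T(N/2) + O(N)  =>  O(N log N)   // e.g. merge sort: two halves plus a linear merge
T(N) = T(N/2) + O(1)    =>  O(log N)     // e.g. binary search: one half plus constant work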
Upvotes: 7
Reputation: 532445
Recursive divide-and-conquer algorithms are often O(log N). An algorithm that loops over a divide-and-conquer step would be O(N log N).
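A minimal sketch of the second case (hypothetical names, just to illustrate): a loop that runs N times and does an O(log N) binary search on each pass is O(N log N) overall.

#include <algorithm>
#include <vector>

// N iterations, each doing an O(log N) std::binary_search => O(N log N) total.
int count_present(const std::vector<int>& sorted_haystack, const std::vector<int>& needles) {
    int hits = 0;
    for (int x : needles)
        if (std::binary_search(sorted_haystack.begin(), sorted_haystack.end(), x))
            ++hits;
    return hits;
}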
Upvotes: 7