Reputation: 503
I'm learning about big O notation and I'm kind of confused. I don't think I really get how "complexity" affects an algorithm, and I can't tell if I'm looking at things backwards.
Is the order of complexity from least to most complex O(1), O(log n), O(n), O(n log n), O(n^2)? Or do I have it backwards?
If this is the correct order, I would guess that a program with a complexity of O(n^2) would process large sets of data faster than a program with O(n log n). However, after testing a bubble sort (O(n^2)) and a quick sort (O(n log n)), it's obvious that the O(n^2) sort processed WAY slower than the O(n log n) one. So I'm confused... is complexity good or bad? Will an algorithm be fast (in terms of how long the program takes to complete) if it's more complex, or will a more complex program be slower?
Upvotes: 1
Views: 6357
Reputation: 19356
Big O notation describes how the number of operations an algorithm performs grows with the number of elements it processes. For example, if you sort an array of 3 elements with an O(n^2) algorithm, the computer will do on the order of n^2 = 9 operations over the array (the exact count depends on constant factors, which Big O deliberately ignores).
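As a rough illustration, here is a minimal Python sketch (not any particular library's code) that counts the comparisons a simple quadratic sort actually makes:

    # A minimal sketch: count the comparisons an unoptimized bubble sort makes.
    def bubble_sort_count(arr):
        comparisons = 0
        n = len(arr)
        for i in range(n):
            for j in range(n - 1):
                comparisons += 1
                if arr[j] > arr[j + 1]:
                    arr[j], arr[j + 1] = arr[j + 1], arr[j]
        return comparisons

    # For n = 3 this does n * (n - 1) = 6 comparisons; Big O rounds that
    # order of growth up to O(n^2), ignoring the constant factors.
    print(bubble_sort_count([3, 1, 2]))  # -> 6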
Here you can look at a comparison between different O complexities: [chart comparing the growth of the common O complexities]
Upvotes: 4
Reputation: 5780
To answer your first question: yes, that is the correct order of complexity, from least to most complex. Big O notation does not tell you how long a program or algorithm takes to run in absolute terms; it describes how the running time (or operation count) grows as the input size grows, which is what lets you compare one approach against another. To help you conceptualize it: O(1) would be a single instruction, O(n) a single loop over the input, and O(n^2) a loop with another loop nested inside it, as in the sketch below.
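For instance, a minimal sketch of those three shapes in Python (the function names are just illustrative):

    def constant_time(items):
        # O(1): a single operation, regardless of input size
        return items[0]

    def linear_time(items):
        # O(n): a single loop touches each element once
        total = 0
        for x in items:
            total += x
        return total

    def quadratic_time(items):
        # O(n^2): a nested loop touches every pair of elements
        pairs = 0
        for x in items:
            for y in items:
                pairs += 1
        return pairs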
To answer your question about O(n^2) being faster than O(n log n): it would probably not be, at least not on large inputs. I say probably because many factors determine a program's speed, and you could conceivably have a program with a worse complexity run faster than a program with a better complexity on small inputs. In other words, Big O notation measures how an algorithm scales, not the raw speed of the program that implements it.
For example, suppose for simplicity's sake that you have two programs: one runs in O(n) time and the other in O(n^2) time.
Running on 10 objects, the O(n) program takes 100 milliseconds, while the O(n^2) program takes only 10 milliseconds. Why is that? Because the O(n^2) program happens to do less work per step (its constant factors are smaller), so on a tiny input it finishes first.
But let's see what happens with 100 objects. The O(n) program now takes 1000 milliseconds (10x the input, 10x the time), while the O(n^2) program also takes 1000 milliseconds (10x the input, 100x the time). At 1000 objects, the O(n) program would take 10 seconds while the O(n^2) program would take 100 seconds. The quadratic program's time grows much more quickly with input size, which is why, generally speaking, it is better to optimize complexity. You can see the same growth in a quick timing experiment like the sketch below.
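A rough way to observe this yourself (the absolute times depend entirely on your machine, and the made-up milliseconds above won't reproduce exactly, but the growth rates will):

    import time

    def linear(n):
        # O(n): one pass over the input
        total = 0
        for i in range(n):
            total += i
        return total

    def quadratic(n):
        # O(n^2): a nested pass, like bubble sort's comparisons
        total = 0
        for i in range(n):
            for j in range(n):
                total += 1
        return total

    for n in (10, 100, 1000):
        for f in (linear, quadratic):
            start = time.perf_counter()
            f(n)
            print(f"{f.__name__}(n={n}): {time.perf_counter() - start:.6f}s")

When n grows 10x, the linear timings should grow roughly 10x while the quadratic ones grow roughly 100x.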
Some problems have no known fast exact algorithm, such as the decision version of the traveling salesman problem, which asks whether there exists a tour of at most a certain length passing through all the cities he must visit. To have a guaranteed best solution by brute force, you must try every possible ordering of the cities, which takes O(n!) time, about as bad as it gets. Fortunately, there are approximation algorithms and heuristics that can find a solution within a few percent of the optimum much faster. Problems like this, where no polynomial-time algorithm is known but a proposed answer can be verified quickly, are called NP problems (the decision version of TSP is NP-complete). A brute-force sketch is below.
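For a feel of how quickly O(n!) explodes, here is a minimal brute-force sketch (the distance matrix is made up purely for illustration):

    from itertools import permutations

    # Brute-force TSP: try every ordering of the cities, O(n!) work.
    def shortest_tour(dist):
        n = len(dist)
        best_len, best_tour = float("inf"), None
        for perm in permutations(range(1, n)):  # fix city 0 as the start
            tour = (0,) + perm
            length = sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))
            if length < best_len:
                best_len, best_tour = length, tour
        return best_len, best_tour

    dist = [[0, 2, 9, 10],
            [2, 0, 6, 4],
            [9, 6, 0, 8],
            [10, 4, 8, 0]]
    print(shortest_tour(dist))  # -> (23, (0, 1, 3, 2))

With 4 cities that is only 3! = 6 tours to check; with 15 cities it is already over 87 billion.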
I hope that answers your question.
Upvotes: 2