user9398286

Are O(n log n) algorithms always better than all O(n^2) algorithms?

When trying to properly understand Big-O, I am wondering whether it's true that O(n log n) algorithms are always better than all O(n^2) algorithms.

Are there any particular situations where O(n^2) would be better?

I've read multiple times that in sorting, for example, an O(n^2) algorithm like bubble sort can be particularly quick when the data is almost sorted. Would it be quicker than an O(n log n) algorithm, such as merge sort, in that case?

Upvotes: 0

Views: 4780

Answers (4)

SimoV8

Reputation: 1402

No, O(n log n) algorithms are not always better than O(n^2) ones. Big-O notation describes an upper bound on the asymptotic behavior of an algorithm, i.e., its behavior as n tends towards infinity.

Given this definition, there are some aspects to consider:

  1. Big-O notation gives an upper bound on the algorithm's complexity, meaning that for some inputs (like the one you mentioned about sorting algorithms) an algorithm with a worse Big-O complexity may actually perform better (bubble sort runs in O(n) on an already sorted array, while mergesort and quicksort always take at least O(n log n));
  2. Big-O notation only describes the class of complexity, hiding all the constant factors that may be relevant in real-world scenarios. For example, an algorithm with running time 1000000·n (class O(n)) performs worse than an algorithm with running time 0.5·n^2 (class O(n^2)) for all inputs smaller than 2000000. Basically, Big-O notation tells you that for big enough inputs n the O(n) algorithm will perform better than the O(n^2) one, but if you work with small inputs you may still prefer the latter; the sketch after this list makes the crossover concrete.
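To make point 2 concrete, here is a minimal Python sketch using the hypothetical running times from the example above (not real measurements); the two cost functions meet exactly at n = 2000000:

    # Hypothetical costs from point 2: which complexity class "wins"
    # depends on the input size, because Big-O hides the constants.

    def cost_linear(n):
        return 1_000_000 * n   # O(n), but with a huge constant factor

    def cost_quadratic(n):
        return 0.5 * n * n     # O(n^2), but with a tiny constant factor

    for n in (1_000, 100_000, 2_000_000, 10_000_000):
        winner = "O(n^2)" if cost_quadratic(n) < cost_linear(n) else "O(n)"
        print(f"n = {n:>10,}: cheaper is {winner}")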

Upvotes: 1

Gassa

Reputation: 8846

Here is a practical example. The GCC implementations of sorting functions have O(n log n) complexity. Still, they switch to O(n^2) algorithms as soon as the size of the part being sorted falls below some small constant. That's because, for small sizes, those algorithms tend to be faster in practice.

See here for some of the internal implementation.
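As an illustration only (this is not GCC's actual code, and the cutoff of 16 is an arbitrary assumption), a hybrid merge sort that falls back to insertion sort on small parts could look like this:

    # Illustrative hybrid sort: O(n log n) merge sort overall, but small
    # subarrays are handled by O(n^2) insertion sort, whose low overhead
    # wins at small sizes. The cutoff of 16 is an assumed value.
    CUTOFF = 16

    def insertion_sort(a):
        for i in range(1, len(a)):
            key, j = a[i], i - 1
            while j >= 0 and a[j] > key:
                a[j + 1] = a[j]
                j -= 1
            a[j + 1] = key
        return a

    def hybrid_sort(a):
        if len(a) <= CUTOFF:
            return insertion_sort(a)
        mid = len(a) // 2
        left, right = hybrid_sort(a[:mid]), hybrid_sort(a[mid:])
        # Standard merge of the two sorted halves.
        merged, i, j = [], 0, 0
        while i < len(left) and j < len(right):
            if left[i] <= right[j]:
                merged.append(left[i]); i += 1
            else:
                merged.append(right[j]); j += 1
        return merged + left[i:] + right[j:]

    print(hybrid_sort([5, 2, 9, 1, 7, 3]))  # [1, 2, 3, 5, 7, 9]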

Upvotes: 0

AbcAeffchen

Reputation: 14987

In addition to @cadaniluk's answer:

If you restrict the inputs to the algorithms to a very special type, this can also affect the running time. E.g., if you run sorting algorithms only on already sorted lists, BubbleSort will run in linear time, but MergeSort will still need O(n log n). There are also algorithms that have a bad worst-case complexity but a good average-case complexity. This means that there are bad input instances on which the algorithm is slow, but overall it is very unlikely that you encounter such a case.
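A minimal sketch of that first point (the early-exit flag is the standard BubbleSort optimization; the code is mine, not from the answer): a pass that performs no swaps proves the list is sorted, so an already sorted input costs only one linear pass.

    # BubbleSort with early exit: O(n) on sorted input, O(n^2) otherwise.

    def bubble_sort(a):
        n = len(a)
        for i in range(n - 1):
            swapped = False
            for j in range(n - 1 - i):
                if a[j] > a[j + 1]:
                    a[j], a[j + 1] = a[j + 1], a[j]
                    swapped = True
            if not swapped:   # no swaps: already sorted, stop early
                break
        return a

    print(bubble_sort([1, 2, 3, 4, 5]))  # one pass, linear time
    print(bubble_sort([4, 1, 3, 2, 5]))  # quadratic behaviour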

Also, never forget that Big-O notation hides constants and additive lower-order terms. So an algorithm with worst-case complexity O(n log n) could actually have a running time of 2^10000 · n · log n, while your O(n^2) algorithm could actually run in (1/2^1000) · n^2. In that case, for n < 2^10000 you really want to use the "slower" algorithm.

Upvotes: 0

cadaniluk

Reputation: 15229

O(n log n) is better than O(n^2) asymptotically.

Big-O, Big-Theta, and Big-Omega all measure the asymptotic behavior of functions, i.e., how functions behave as their argument tends toward a certain limit.

O(n log n) functions grow slower than O(n^2) functions; that's what Big-O notation essentially says. However, this does not mean that O(n log n) is always faster. It merely means that beyond some point, as n keeps rising, the O(n log n) function will always be cheaper.

[Figure: Big O — plot of f(n) against g(n)]

In that image, f(n) = O(g(n)). Note that there is a range where f(n) is actually more costly than g(n), even though it is asymptotically bounded by g(n). However, when talking about limits, or asymptotics for that matter, f(n) outperforms g(n) "in the long run", so to speak.
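As a rough numerical illustration of that crossover (the constant factors and the brute-force search below are my own, not from the answer): however large the constant on n · log n, there is always a point past which it stays below n^2.

    import math

    def crossover(c):
        """Smallest n >= 2 with c * n * log2(n) < n * n."""
        n = 2
        while c * n * math.log2(n) >= n * n:
            n += 1
        return n

    print(crossover(1), crossover(100), crossover(10_000))
    # the crossover point grows with the constant, but it always exists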

Upvotes: 1
