No Name QA

Reputation: 784

Why do we prefer not to specify the constant factor in Big-O notation?

Let's consider the classic big-O notation definition (proof link):

O(f(n)) is the set of all functions g(n) such that there exist positive constants C and n_0 with |g(n)| ≤ C * f(n) for all n ≥ n_0.
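For instance, the constants in this definition can be exhibited explicitly for g(n) = 9999 * n^2 + n by taking C = 10000 and n_0 = 1:

9999 * n^2 + n ≤ 9999 * n^2 + n^2 = 10000 * n^2 for all n ≥ 1,

so 9999 * n^2 + n ∈ O(n^2).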

According to this definition, it is legal to write the following (g1 and g2 are functions describing the complexity of two algorithms):

g1(n) = 9999 * n^2 + n ∈ O(9999 * n^2)

g2(n) = 5 * n^2 + n ∈ O(5 * n^2)

And it is also legal to write them as:

g1(n) = 9999 * n^2 + n ∈ O(n^2)

g2(n) = 5 * n^2 + n ∈ O(n^2)

As you can see, the first variant, O(9999 * n^2) vs. O(5 * n^2), is much more precise and makes it clear which algorithm is faster. The second one does not tell us anything about that.

The question is: why does nobody use the first variant?

Upvotes: 0

Views: 176

Answers (1)

einpoklum

Reputation: 131960

The use of the O() notation is, from the get-go, the opposite of noting something "precisely". The very idea is to mask "precise" differences between algorithms, as well as to be able to ignore the specifics of the computing hardware and the choice of compiler or programming language. Indeed, g1(n) and g2(n) are both in the same class (or set) of functions of n - the class O(n^2). They differ in specifics, but they are similar enough.
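As an illustration, here is a minimal Python sketch (the function names and the deliberately redundant arithmetic are mine, not from the question). Both routines do a quadratic number of steps, but the second one has a noticeably larger constant factor; how much larger is exactly the kind of machine- and interpreter-dependent detail that O() is designed to abstract away:

import time

def pair_sum(xs):
    # One addition per pair: a small constant factor, O(n^2) overall.
    total = 0
    for a in xs:
        for b in xs:
            total += a + b
    return total

def pair_sum_padded(xs):
    # Same asymptotic class, but redundant work on every pair,
    # so the hidden constant is several times larger.
    total = 0
    for a in xs:
        for b in xs:
            t = (a + b) * 1
            t = t + 0
            t = t * 1
            total += t
    return total

data = list(range(1000))

for f in (pair_sum, pair_sum_padded):
    start = time.perf_counter()
    f(data)
    print(f.__name__, time.perf_counter() - start)

Doubling the input size roughly quadruples both timings, which is the O(n^2) behaviour. The ratio between the two functions, on the other hand, shifts with the language implementation and the hardware, so a "precise" constant like 5 or 9999 would pin down a number that is not stable across environments.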

The fact that it's a class is why I edited your question and corrected the notation from = O(9999 * N^2) to ∈ O(9999 * N^2).

By the way - I believe your question would have been a better fit on cs.stackexchange.com.

Upvotes: 4
