poddroid

Reputation: 855

Why do we ignore the constants when computing the running time complexity of an algorithm?

Can someone please explain the reason behind ignoring the constants when computing the running time complexity of an algorithm?

Thanks

Upvotes: 4

Views: 4127

Answers (4)

cammando

Reputation: 616

As the value of n grows in the expression 6n^2 + 100n + 300, the constants become less and less relevant, so we tend to ignore them. See the following image:

[image omitted]

Picture taken from Asymptotic Notation section at Khan Academy.
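To see this numerically, here is a small Python sketch (my own illustration, not from the Khan Academy page) that prints what fraction of 6n^2 + 100n + 300 comes from the 6n^2 term as n grows:

```python
# As n grows, the 6n^2 term accounts for almost all of 6n^2 + 100n + 300,
# so dropping the lower-order terms and constant factors changes very little.
for n in (10, 100, 1_000, 10_000, 100_000):
    total = 6 * n**2 + 100 * n + 300
    share = 6 * n**2 / total
    print(f"n = {n:>7}: 6n^2 is {share:.4%} of the total")
```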

Upvotes: 2

davin

Reputation: 45545

When analysing time complexity, constants are both difficult to calculate and largely irrelevant.

On some architectures, addition might take twice as long as multiplication, so to get an accurate runtime analysis we would now have to go through the algorithm and count the number of additions and the number of multiplications it makes. Not pretty!

More importantly, even if that is true now, at some point in the future, or on another slightly different architecture, that constant may be different, so the runtime would vary between architectures. So now my algorithm has more than one runtime? One for now on this architecture, another for a different architecture, and each of those may change in the future... Again, not pretty.

In a world where computing capabilities change all the time (twice the CPU capability tomorrow, four times the memory in a week, etc.), constant factors have little relevance. That isn't the case if we need to quantify real-life runtime, but it is the case when we are analysing the complexity of an algorithm in general, rather than the complexity of a solution in a specific environment.

Also, and probably most importantly, constant factors are not a good measure of the complexity of the problem, and at the end of the day we are trying to measure complexity. An algorithm of a given complexity class behaves (or, more accurately, is bounded) in a certain way for inputs of every size. So when I try to measure the general complexity of two solutions, I do so in as general a way as possible, and therefore consider all values of the input size (i.e. n -> infinity).

Finally, it allows theoreticians to group algorithms into the same class, irrespective of certain constant factors that may or may not change and may or may not be improved. This is helpful for, among other reasons, optimality proofs: finding that a problem is Omega(f(n)) is only useful if we consider algorithms in the complexity class O(f(n)), irrespective of the constants.
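As a toy sketch of the architecture point above (my own example, with made-up cost ratios): costing the same quadratic algorithm on two hypothetical machines, where addition and multiplication have different relative prices, changes the totals only by a constant factor; the growth rate stays quadratic either way:

```python
# Hypothetical cost model: n^2 multiplications and n^2 - 1 additions,
# as in a naive "sum of all pairwise products" loop over n values.
def cost(n, add_cost, mul_cost):
    return n * n * mul_cost + (n * n - 1) * add_cost

for n in (100, 200, 400):
    machine_a = cost(n, add_cost=1, mul_cost=1)
    machine_b = cost(n, add_cost=2, mul_cost=1)  # addition twice as expensive
    # The ratio settles near a constant (~1.5); both counts grow quadratically.
    print(n, machine_a, machine_b, round(machine_b / machine_a, 3))
```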

Upvotes: 5

sam hocevar

Reputation: 12129

You only ignore constants when doing rough estimates. Most of the time, it's a valid simplification: when comparing two algorithms on large input sizes, such as sorting an array, an O(n log n) algorithm will eventually be faster (or smaller) than an O(n²) one no matter what.

However, when two algorithms have the same complexity, or when the expected dataset is so small that asymptotic behaviour is not a valid real-world scenario, then the constant is definitely important. For instance, a bubble sort implementation is likely to perform faster than quicksort on an array of 3 or 4 values.
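As an informal way to check this on your own machine (a rough benchmark sketch; the crossover point depends entirely on the implementations and hardware, so treat the numbers as illustrative):

```python
import random
import timeit

def bubble_sort(a):
    # Simple O(n^2) sort with a very small constant per step.
    a = list(a)
    n = len(a)
    for i in range(n):
        for j in range(n - 1 - i):
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
    return a

def quicksort(a):
    # Simple O(n log n) average-case sort with more per-call overhead.
    if len(a) <= 1:
        return list(a)
    pivot = a[len(a) // 2]
    return (quicksort([x for x in a if x < pivot])
            + [x for x in a if x == pivot]
            + quicksort([x for x in a if x > pivot]))

for size in (4, 300):
    data = [random.random() for _ in range(size)]
    t_bubble = timeit.timeit(lambda: bubble_sort(data), number=100)
    t_quick = timeit.timeit(lambda: quicksort(data), number=100)
    print(f"n={size}: bubble sort {t_bubble:.4f}s, quicksort {t_quick:.4f}s")
```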

Upvotes: 3

amit

Reputation: 178491

Because when comparing very large values of the input size (let's call it n), n^2 will be higher than k*n for any constant k, as n -> infinity. This is of course true for any power/exponent of n.

Note that in real-life projects you do make these optimizations and try to minimize the constants, but usually they are less significant than the degree of the polynomial.
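A quick numeric sketch of that crossover (my own illustration): for any fixed constant k, k*n falls below n^2 as soon as n exceeds k, and the gap keeps widening:

```python
# For any constant k, n^2 overtakes k*n once n > k, and the ratio n^2/(k*n)
# keeps growing with n, so the constant eventually does not matter.
for k in (10, 1_000, 1_000_000):
    n = 10 * k  # any n larger than k works
    print(f"k={k}: k*n = {k * n}, n^2 = {n * n}, ratio n^2/(k*n) = {n / k:.1f}")
```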

Upvotes: 5
