Reputation: 233
Can someone please explain to me in a simple way why constants don't matter when it comes to big O notation? Why does the complexity stay the same when you add a constant? This is not a homework question; I just want to understand this better. Let me get this straight: big O is for seeing the behavior of a function as it approaches infinity, right?
I got it. Thanks so much everyone.
Upvotes: 8
Views: 4330
Reputation:
Let's say we are computing "big" factorials all day long: one function computes the factorial of N directly, and a second one first computes the factorial of 50 and only then the factorial of N.
The complexity of both functions is O(N).
Because the constant 50 is not related to N (we can ignore that gap as N approaches +inf), in theory the extra factorial of 50 does not matter... in real life it's still better to avoid computing the factorial of 50 before returning the answer.
We're so happy with our results that tomorrow we will compute even bigger factorials.
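Since the two functions the answer compares aren't shown, here is a minimal sketch of what they might look like (the function names are assumptions, not from the original answer):

```python
def factorial(n):
    """Iterative factorial: about n multiplications, i.e. O(N) work."""
    result = 1
    for i in range(2, n + 1):
        result *= i
    return result

def factorial_with_constant_overhead(n):
    """Same O(N) answer, but with a fixed factorial(50) computed first.

    The extra work does not depend on N, so asymptotically it changes
    nothing, even though in practice it is pure waste.
    """
    factorial(50)        # constant amount of extra work, unrelated to N
    return factorial(n)
```

For a large N, the time spent on factorial(50) is invisible next to the N multiplications of the main loop, which is why both functions land in the same O(N) class.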
Upvotes: 0
Reputation: 417
Constants are not important in big-O notation because the concern is scalability: how the cost grows as the input grows, not how large it is for any one input size.
Upvotes: 1
Reputation: 262684
It does not matter for complexity theory, which is interested only in how the function scales as the input size grows.
A constant has no effect at all on how the function behaves as the input size grows towards infinity.
However, if you are interested in actually running a certain piece of code, you may very well be interested in a large constant overhead and how the function performs for smaller input sizes.
That is the difference between complexity theory and practice.
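As a hypothetical illustration of that last point (the cost formulas below are invented for the sake of the example), an O(n) routine with a large constant overhead can lose to a plain O(n²) routine for small inputs:

```python
def cost_linear_with_overhead(n):
    """Made-up cost model: a big fixed setup cost plus linear work -> O(n)."""
    return 100_000 + 10 * n

def cost_quadratic(n):
    """Made-up cost model: quadratic work with no setup cost -> O(n^2)."""
    return n * n

for n in (10, 100, 1_000, 10_000):
    print(n, cost_linear_with_overhead(n), cost_quadratic(n))
# For small n the O(n^2) routine is cheaper; only once n is large enough
# does the asymptotically better O(n) routine win.
```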
Upvotes: 12
Reputation: 54591
In practice, constants sometimes do matter. But when we speak of Big O notation, we're looking at asymptotic behavior. The reason a constant doesn't affect the asymptotic behavior is that a function with a faster growing curve will always overtake a function with a slower growing curve, even in the presence of a huge constant (though it will take longer to get there, of course).
Thus we say that the constant "doesn't matter", because it can never change the asymptotic relationship between the curves.
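A small sketch of that overtaking, with an arbitrary huge constant of 1,000,000 chosen for illustration:

```python
# A slowly growing curve with a huge constant vs. a faster growing curve.
def with_huge_constant(n):
    return 1_000_000 + n      # linear growth, huge constant

def faster_growth(n):
    return n * n              # quadratic growth, no constant

# Find the first n where the faster growing curve overtakes the other.
n = 1
while faster_growth(n) <= with_huge_constant(n):
    n += 1
print(n)   # 1001 -- from here on, n*n stays above 1_000_000 + n forever
```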
Upvotes: 7
Reputation: 2644
Big O describes how the complexity changes as the input gets large. The larger your input gets, the less important constants become. For example, multiplying something by 10 is significantly less important than squaring it when n gets to one million or one billion. Big O is not an exact measurement; it's a way to put your algorithm into a rough complexity class, so you can round off the constants because they are not meaningful at huge values of n.
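Plugging in the answer's own numbers makes the gap visible:

```python
n = 1_000_000
print(10 * n)   # 10,000,000: the factor of 10 is a modest bump
print(n * n)    # 1,000,000,000,000: squaring dwarfs any constant factor
```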
Upvotes: 1
Reputation: 3061
The answer is... by definition. If some function f(x) is big-O of some function g(x), it just means that f(x) is "eventually" smaller than some constant times g(x) (i.e. for large enough x). The actual value of the constant doesn't matter; nor does the position of the "eventually" behavior: as long as it's true for a large enough x and a large enough constant, you're covered.
You can add in constants, or anything with a smaller O, and it won't matter; it's all about which term grows the fastest, and which term dominates as x grows.
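A concrete, made-up instance of that definition, written out as a short derivation:

```latex
% f(x) = 5x + 100 is O(x): take the constant C = 6 and the threshold x_0 = 100.
% For every x >= 100 we have 100 <= x, so
%   f(x) = 5x + 100 <= 5x + x = 6x = C * g(x),   where g(x) = x.
% The added constant 100 only shifts where "eventually" begins (x_0);
% it never changes the O(x) classification.
\[
  f(x) = 5x + 100 \le 6x \quad \text{for all } x \ge 100,
  \qquad \text{hence } f(x) = O(x).
\]
```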
Upvotes: 3
Reputation: 16364
Big-O notation is about how the usage of resources (time, memory) changes when the number of items changes. So, for example, if my computer can sort 10 items in 1 second, 20 items in 4 seconds, and 30 items in 9 seconds, then it is O(n²).
If I can sort those same items by hand in 10 seconds, 40 seconds, and 90 seconds, respectively, then I am still O(n²), but my constant factor is 10 times larger: The same size of problem takes me ten times as long as the computer.
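A quick sanity check of those measurements (the numbers are the ones from the answer; fitting a constant c with time = c · n² is just for illustration):

```python
computer = {10: 1, 20: 4, 30: 9}      # items sorted -> seconds
by_hand  = {10: 10, 20: 40, 30: 90}   # same items, sorted by hand

# If time = c * n^2, each measurement should give back the same constant c.
for label, times in (("computer", computer), ("by hand", by_hand)):
    constants = [t / n**2 for n, t in times.items()]
    print(label, constants)
# computer: [0.01, 0.01, 0.01]
# by hand:  [0.1, 0.1, 0.1]  -> identical n^2 growth, constant factor 10x larger
```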
Upvotes: 0