Reputation: 37
I was going through some lectures on time complexity, and in this video https://www.youtube.com/watch?v=__vX2sjlpXU the author explains at 4:50 that constants do matter in a lot of situations when the input sizes are small. Kindly explain.
Upvotes: 1
Views: 1895
Reputation: 976
Well, when I was studying algorithms and their complexity, our professor briefly explained that constants matter a lot. In complexity theory there are two main notations for describing complexity: the first is Big-O and the second is tilde (~) notation.
Suppose you implement a priority queue (using a heap) that takes ~2 lg N compares to remove the maximum element and ~1 + lg N compares to insert each item. What people actually do is drop the constant from 2 lg N and write it as O(log N), but that is not quite right: the ~1 + lg N compares are needed only to insert an element, while removing an element also has to rebalance the queue (the sink and swim functions). If you write ~2 lg N as O(log N), you are effectively counting the cost of only one of those functions, either swim or sink.
As a reference I will add that at some top-ranking universities, most professors use the ~ notation. Big-O can be harmful: the book by Robert Sedgewick and Kevin Wayne uses ~ and also explains why the authors prefer it.
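To see where those constants come from, here is a minimal sketch of an array-based max-heap in Java (modelled on the MaxPQ from Sedgewick and Wayne's book; the exact class and details here are my own illustration). swim makes one compare per level on the way up, which gives insert its ~lg N compares; sink makes two compares per level on the way down, which gives remove-the-maximum its ~2 lg N compares.

class MaxPQ {
    private int[] pq = new int[2]; // pq[1..n] holds the heap, pq[0] is unused
    private int n = 0;

    void insert(int x) {
        if (n == pq.length - 1) pq = java.util.Arrays.copyOf(pq, 2 * pq.length);
        pq[++n] = x;
        swim(n); // ~lg N compares: one compare per level going up
    }

    int delMax() { // assumes the queue is not empty
        int max = pq[1];
        pq[1] = pq[n--];
        sink(1); // ~2 lg N compares: two compares per level going down
        return max;
    }

    private void swim(int k) {
        while (k > 1 && pq[k / 2] < pq[k]) { // 1 compare per level
            swap(k / 2, k);
            k /= 2;
        }
    }

    private void sink(int k) {
        while (2 * k <= n) {
            int j = 2 * k;
            if (j < n && pq[j] < pq[j + 1]) j++; // compare 1: pick the larger child
            if (pq[k] >= pq[j]) break;           // compare 2: parent vs. larger child
            swap(k, j);
            k = j;
        }
    }

    private void swap(int i, int j) { int t = pq[i]; pq[i] = pq[j]; pq[j] = t; }
}

Counting compares along one root-to-leaf path gives exactly the ~1 + lg N and ~2 lg N figures quoted above, which an O(log N) summary would flatten into the same class.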
Upvotes: 0
Reputation: 14389
Let's say there are two algorithms with actual costs of 100n and 2n², so they are O(n) and O(n²) respectively. For n = 2 they will take 200 and 8 CPU cycles, respectively. But for values of n greater than 50, the 100n algorithm will always perform better than the 2n² algorithm.
This way we see that for smaller inputs, Big-O may not be a good judge of algorithms, and constants play a significant role, especially when they are quite big compared to the input.
Similarly, you can work out what happens with time complexities like 100 + n and 2 + n². For values of n that are not big enough to overtake the influence of the constants, the actual execution time may be governed by the constants rather than by n.
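To make the crossover concrete, here is a small Java sketch (the cost functions are the modeled 100n and 2n² from above, not real measurements) that prints which algorithm is cheaper for a few input sizes:

public class Crossover {
    public static void main(String[] args) {
        for (int n : new int[] {2, 10, 50, 100, 1000}) {
            long linear = 100L * n;      // O(n) algorithm with a big constant
            long quadratic = 2L * n * n; // O(n^2) algorithm with a small constant
            String cheaper = linear < quadratic ? "100n"
                           : linear > quadratic ? "2n^2" : "tie";
            System.out.printf("n=%4d  100n=%7d  2n^2=%8d  cheaper=%s%n",
                    n, linear, quadratic, cheaper);
        }
    }
}

Running it shows the 2n² model winning below n = 50, a tie at n = 50, and the 100n model winning everywhere above.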
Upvotes: 3
Reputation: 25903
For the mathematical notion of time complexity, constants do not matter.
However, if your program has big constants it can be slower than a program with worse complexity, even though its own complexity is good. That is kind of obvious: imagine a sleep for 1 hour. Your program takes long, a big constant, but its complexity class can still be good, since constants do not matter for the class.
Why don't they matter? Because for every program with worse complexity there is some input size (big inputs) beyond which it will be the slower one, no matter the constants.
Here is an example:
Good complexity O(1), slow anyway:
void method() throws InterruptedException {
    Thread.sleep(60 * 60 * 1_000); // sleep for 1 hour
}
Worse complexity O(n), faster for small inputs:
void method(int n) throws InterruptedException {
    for (int i = 0; i < n; i++) {
        Thread.sleep(1_000); // sleep for 1 second per iteration
    }
}
However, if you input n > 60 * 60, the second method will get slower.
You shouldn't confuse time complexity with the actual, measurable running time; there is a huge difference between the two.
Time complexity is about asymptotic bounds. See the definition for f ∈ O(g):
f ∈ O(g) if and only if there exist constants c > 0 and n₀ such that f(n) ≤ c · g(n) for all n ≥ n₀.
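For example, to check that 100n ∈ O(n), pick c = 100 and n₀ = 1: then 100n ≤ c · n for all n ≥ 1. The constant factor is simply absorbed into c, which is exactly why it does not change the complexity class.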
Upvotes: 0