Reputation: 4928
I understand that O(1) indicates an algorithm will take a constant amount of execution time regardless of the input dimensions. I also understand that O(N) indicates a linear increase in execution time proportional to the input dimension size.
However, I only know this because I memorized the definitions. I have no intuition for interpreting O(1); I just recall that it means constant-time execution. I'm curious how I can develop an intuition for reading big O notation.
So for constant time execution O(1), what does the 1 represent? Why not have it be O(2)? 2 is also a constant that is independent of input size N.
Upvotes: 3
Views: 1044
Reputation:
You are free to use O(2), just like you can use O(33652n² - log n), at the risk of seeming weird.
Upvotes: 1
Reputation: 51093
The notation O(...) means a set of functions. Roughly speaking, O(f(n)) is the set of functions which don't grow asymptotically faster than f does.
The constant function f(n) = 1 doesn't grow at all, and neither does the constant function f(n) = 2, so neither grows asymptotically faster than the other. Also, any other function grows asymptotically faster than 1 if and only if it grows asymptotically faster than 2. So a function is in the set O(1) if and only if it is in the set O(2), meaning they are the same set.
This means you can write O(2) and it is strictly correct, but it is simpler (and hence conventional) to write O(1). You can think of this a bit like solving a maths problem where the answer is a fraction; you are expected to write the answer in its simplest form. Strictly speaking, 6/4 is equal to 3/2, but it is conventional to write 3/2.
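To make the "same set" point concrete, here is a short sketch using the standard definition of big O (the c and n₀ below are just the usual constants from that definition):

\[
f \in O(g) \iff \exists\, c > 0,\ n_0 \text{ such that } |f(n)| \le c\,|g(n)| \text{ for all } n \ge n_0.
\]

Taking f(n) = 2 and g(n) = 1, the choice c = 2 gives 2 ≤ 2·1, so 2 is in O(1); taking f(n) = 1 and g(n) = 2, the choice c = 1 gives 1 ≤ 1·2, so 1 is in O(2). Each constant is within a constant factor of the other, so exactly the same functions are bounded by multiples of each, and O(1) = O(2).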
Upvotes: 6
Reputation: 7714
Because when you are evaluating the complexity of a function, what matters is the variables. If you find any constant, you drop it.
For example:
for (int i = 0; i < 2 * N; i++) { printf("Hello\n"); }
Your complexity is O(2N), but since what matters is the variable N, you take the constant out of the equation, leaving O(N).
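If you want to see that constant in isolation, here is a minimal runnable sketch (the sizes are arbitrary, chosen only for illustration): it counts how many times the loop body executes, and the count is always exactly 2 * N, so doubling N doubles the work no matter what the constant factor is.
#include <stdio.h>

int main(void) {
    /* Arbitrary sample sizes, just to show the growth pattern. */
    for (int N = 1000; N <= 8000; N *= 2) {
        long count = 0;
        for (int i = 0; i < 2 * N; i++) {
            count++;  /* stands in for the printf("Hello\n") above */
        }
        printf("N = %d -> %ld iterations (= 2 * N)\n", N, count);
    }
    return 0;
}
The constant 2 changes the height of that line, but not its shape: it is still a straight line, which is all that O(N) claims.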
Now suppose this:
int a = b + c + d;
The complexity is obviously constant, but the number of operations isn't just 1; let's say it is 3 (two additions and one assignment). Then you have O(3). We can safely say that O(3) = O(3 * N^0). Following the same procedure of dropping the constant, we are left with O(N^0) = O(1).
Just to clarify: when we evaluate complexities, we imagine that the variable takes very large values, so large that any multiplicative constant makes no difference as the variable goes to infinity; that is why we drop it. The same holds for additive constants, for example O(5N + 3) = O(N).
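As a quick worked check of that last equality (a sketch using the usual definition of big O, i.e. finding a constant c and a threshold n₀):

\[
5N + 3 \le 5N + 3N = 8N \quad \text{for all } N \ge 1,
\]

so 5N + 3 is in O(N) with c = 8 and n₀ = 1; conversely, N ≤ 5N + 3 for all N ≥ 0, so N is in O(5N + 3) with c = 1. Both the multiplicative 5 and the additive 3 get absorbed into the constant c, which is why O(5N + 3) = O(N).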
Upvotes: 2