Reputation: 632
I am trying to learn how to calculate the Big O notation of a function. I don't really understand when we should add these "n" values and when not.
I will write an example using pseudocode so you get my idea:
define function x(n){
    for (i = 1; i < n; i++) {
        for (j = 1; j < n; j++) {
            // Statement
        }
    }
}
Here, the first for loop takes n time to execute; I understood that. The second for loop also takes n time to execute. Because this is a nested loop (one for loop inside another), we multiply the values and get n^2, so the Big O notation would look like this ==> O(n^2).
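To convince myself, I wrote a small sketch (in Python, with a hypothetical counter) that counts how many times the innermost statement runs; the count grows quadratically, matching the O(n^2) reasoning:

```python
def nested_count(n):
    """Count how often the innermost statement of the
    nested-loop version would execute."""
    count = 0
    for i in range(1, n):       # outer loop: n - 1 iterations
        for j in range(1, n):   # inner loop: n - 1 iterations each time
            count += 1
    return count

# (n - 1) * (n - 1) iterations in total:
print(nested_count(10))  # 81 = 9 * 9
```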
Now, if we had two loops next to each other like this, we would add the n values, e.g.:
define function x(n){
    for (i = 1; i < n; i++) {
        // Statement
    }
    for (j = 1; j < n; j++) {
        // Statement
    }
}
Now, the first loop takes n time to execute and the second loop also takes n time to execute. So we add the values and get 2n; dropping the constant factor, the Big O notation would look like this ==> O(n).
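The same counting sketch for the sequential version (again Python, names are mine) shows the count growing linearly, not quadratically:

```python
def sequential_count(n):
    """Count statement executions for two loops
    placed one after the other."""
    count = 0
    for i in range(1, n):  # first loop: n - 1 iterations
        count += 1
    for j in range(1, n):  # second loop: n - 1 iterations
        count += 1
    return count

# 2 * (n - 1) iterations, which is O(n):
print(sequential_count(10))  # 18 = 2 * 9
```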
Now, here is my problem:
define function x(n){
    p = 0
    for (i = 1; i < n; i *= 2) {
        p++;
    }
    for (j = 1; j < p; j *= 2) {
        // Statement
    }
}
So here, the first loop takes log2(n) time, which is clear to me; I understood why. After that loop, the variable p will be log2(n). So the second for loop will need log2(p) time, which is log2(log2(n)).
Now, I watched a YouTube video that said that we don't add the values and that the Big O notation of the function remains the time needed for the second for loop, which is log2(p), so O(log2(log2(n))). So this is my problem: why don't we add the time needed for the first loop to the time needed for the second loop like we used to do? Then we would have something like log2(n) + log2(log2(n)). Why do we have to say that the Big O notation of this function is the time needed for the second loop? Why do we ignore the first loop at the end?
Upvotes: 0
Views: 33
Reputation: 1490
It is true that the first loop takes O(log(n)), and therefore p is approximately log(n).
Thus the second loop has a complexity of log2(log2(n)), which grows asymptotically slower than the first loop.
If you are asked about the complexity of the whole code, it is dominated by the first loop, so it is O(log2(n)). We still add the two terms, log2(n) + log2(log2(n)), but Big O keeps only the fastest-growing one.
The video you're referring to asks only about the second loop on its own, which is O(log2(log2(n))).
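You can see the domination concretely by counting iterations of each loop (a Python sketch of the pseudocode above; the function name is mine):

```python
def loop_counts(n):
    """Return (first-loop iterations, second-loop iterations)
    for the doubling-loop pseudocode."""
    p = 0
    i = 1
    while i < n:   # runs about log2(n) times
        p += 1
        i *= 2
    second = 0
    j = 1
    while j < p:   # runs about log2(p) = log2(log2(n)) times
        second += 1
        j *= 2
    return p, second

# For n = 2**20, the first loop runs 20 times (log2 of n),
# the second only 5 times (roughly log2 of 20) -- the first
# loop's cost dominates the sum.
print(loop_counts(2**20))  # (20, 5)
```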
Upvotes: 1