Reputation: 21
As a beginner programmer, I've always had trouble working out the complexity of even seemingly simple code. The code in question is:
k = 1;
while (k <= n) {
    cout << k << endl;
    k = k * 2;
}
At first I thought the complexity was O(log n) because of the k = k * 2 line. To test it, I ran the code and kept track of how many times it looped relative to the size of n; the count stayed relatively low even for large values of n. I'm also fairly sure it isn't O(n), because the loop would have taken much longer to run, but I could be wrong there, which is why I'm asking.
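Here is a rough sketch of the kind of test I ran, in case it helps (countIterations is just a name I made up for my counter):

#include <cmath>
#include <iostream>
using namespace std;

// Count how many times the loop body executes for a given n.
int countIterations(int n) {
    int count = 0;
    int k = 1;
    while (k <= n) {
        k = k * 2;
        ++count;
    }
    return count;
}

int main() {
    // Print the iteration count next to log2(n) for growing n,
    // so the growth pattern is visible.
    for (int n = 10; n <= 1000000; n *= 10) {
        cout << "n = " << n
             << ", iterations = " << countIterations(n)
             << ", log2(n) = " << log2(n) << endl;
    }
    return 0;
}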
Thanks!
Upvotes: 1
Views: 127
Reputation: 101
In your example k doesn't increase by 1 (k++); it doubles on every pass, so the loop finishes in O(log n) time. Remember that logarithms are the inverse of exponentiation. Logarithms appear whenever something is repeatedly halved or doubled, like k in your example.
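Here is a quick sketch of that halving/doubling duality (the loop bounds and names are just illustrative): doubling k from 1 up to n takes exactly as many steps as halving n down to 1.

#include <iostream>
using namespace std;

int main() {
    int n = 100;

    // Doubling: how many steps does k take to pass n?
    int doublings = 0;
    for (int k = 1; k <= n; k *= 2)
        ++doublings;

    // Halving: how many steps does n take to drop below 1?
    int halvings = 0;
    for (int m = n; m >= 1; m /= 2)
        ++halvings;

    // Both counts are floor(log2(n)) + 1, so they always match.
    cout << "doublings: " << doublings
         << ", halvings: " << halvings << endl;
    return 0;
}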
Upvotes: 2
Reputation: 5270
Example: 1~9
        1
      /   \
     2     3
    / \   / \
   4   5 6   7
  / \
 8   9
The depth of the tree (or just follow the path 1, 2, 4, 8, ...) is always ⌊log₂ n⌋ + 1, so the complexity is O(log n).
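A small sketch of the same idea (depth is just an illustrative helper name): the number of times n can be halved before reaching 0 is ⌊log₂ n⌋ + 1, which is both the depth of the tree above and the iteration count of the original loop.

#include <iostream>
using namespace std;

// How many times n can be halved before reaching 0.
// This equals floor(log2(n)) + 1, the depth of the tree above.
int depth(int n) {
    int d = 0;
    while (n > 0) {
        n /= 2;
        ++d;
    }
    return d;
}

int main() {
    // For n = 9 the path 1 -> 2 -> 4 -> 8 has 4 nodes, and depth(9) == 4.
    cout << "depth(9) = " << depth(9) << endl;
    return 0;
}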
Upvotes: 1
Reputation: 161
As you suggested, the provided example is O(log n), because k is multiplied by a constant each iteration, regardless of the size of n. This behavior can also be observed by comparing the traversal counts of two very simple test cases.
For instance, if n = 10, the program iterates through the loop 4 times (k = 1, 2, 4, 8). Yet if you double the value of n so that n = 20, the program requires only one more traversal (k = 16), whereas you would expect a program that is O(n) to require roughly twice as many traversals as the original test case.
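For reference, a minimal sketch that counts the traversals for both cases, using the same loop as the question:

#include <iostream>
using namespace std;

int main() {
    // Run the question's loop for two values of n and count traversals.
    for (int n : {10, 20}) {
        int count = 0;
        for (int k = 1; k <= n; k *= 2)
            ++count;
        // n = 10 -> 4 traversals (k = 1, 2, 4, 8)
        // n = 20 -> 5 traversals (k = 1, 2, 4, 8, 16)
        cout << "n = " << n << ": " << count << " traversals" << endl;
    }
    return 0;
}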
Upvotes: 1
Reputation: 301
It is O(log n). Each iteration k doubles, which means that after about log₂ n iterations it will be equal to or greater than n, and the loop stops.
Upvotes: 2