ubaabd

Reputation: 435

Is O((1/2) log N) = O(log N)?

Suppose I have an algorithm with a memory requirement of log N + 1, where N is the size of the problem (the number of bits to process). I propose a 2nd version that reduces this memory requirement to (log N)/2 + 1. I have learnt that constants are ignored in Big-O analysis, so both versions of the algorithm have a space complexity of O(log N).

Now, if I calculate the memory that I saved using the 2nd version of the algorithm, I get

Memory saved at N = M(N) = 1 - [(log N)/2 + 1] / [log N + 1]
lim N→∞ M(N) = 1/2

which shows that asymptotically I will always save 50% of the memory. I am confused about why this gain does not show up in the Big-O analysis.
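For concreteness, here is a minimal Python sketch of the computation above (assuming log base 2, which the question does not fix) that evaluates M(N) for growing N:

```python
from math import log2

# Memory requirements from the question, as functions of problem size N.
# Log base 2 is an assumption; any base gives the same limit.
def mem_v1(n):
    return log2(n) + 1

def mem_v2(n):
    return log2(n) / 2 + 1

# Fraction of memory saved by version 2: M(N) = 1 - mem_v2(N) / mem_v1(N)
for exp in (10, 20, 40, 80):
    n = 2 ** exp
    saved = 1 - mem_v2(n) / mem_v1(n)
    print(f"N = 2^{exp}: M(N) = {saved:.4f}")  # creeps toward 0.5
```

The saving approaches 50%, yet both functions grow at the same rate, which is all that O(log N) records.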

My second question is: if my understanding of Big-O notation is wrong, what is the proper way of highlighting the memory saved by the 2nd version of the algorithm?

Upvotes: 5

Views: 1389

Answers (3)

templatetypedef

Reputation: 373052

Remember that big-O notation does not include constant factors. The functions f(n) = n and g(n) = 10^100 · n are both O(n), though f(n) is a much, much smaller function than g(n).

Your analysis is correct - if you can make the space usage (log n)/2 + 1, then you will (in the limit) be halving the amount of memory required. However, this will not show up in a big-O analysis, since big-O ignores constant factors. As mentioned in some of the other answers, big-O notation captures long-term growth rates; constants might tell you more about the absolute amount of space used, but they don't control the long-term growth rate of the space usage.

If you want to do a more precise analysis, you could give the exact memory usages before and after and then say that you have reduced the memory usage by 50%. Many papers on algorithms and data structures do actually include the constant factors and mention that they're getting a constant speedup. For example, the Cholesky factorization algorithm and Gaussian elimination both give O(n^3) algorithms for solving linear systems, but when the Cholesky factorization can be used it does so with about 50% fewer operations. Most textbooks covering these topics will mention that although both algorithms are O(n^3), the former is preferable to the latter when it's possible to use it.
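As an illustration (a sketch using the standard leading-order textbook flop counts, which the paragraph above only states approximately):

```python
# Leading-order flop counts for solving an n-by-n linear system
# (standard textbook approximations; lower-order terms dropped):
#   Gaussian elimination (LU factorization): ~ (2/3) * n^3 flops
#   Cholesky factorization (SPD systems):    ~ (1/3) * n^3 flops
for n in (100, 1_000, 10_000):
    lu = (2 / 3) * n ** 3
    cholesky = (1 / 3) * n ** 3
    print(f"n = {n:>6}: LU ~ {lu:.2e}, Cholesky ~ {cholesky:.2e}, "
          f"ratio = {cholesky / lu:.2f}")
# Both counts are Theta(n^3), hence O(n^3); the ratio is a constant 0.50.
```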

Hope this helps!

Upvotes: 5

Lord Peter

Reputation: 3501

Big-O is of great use in determining how an algorithm or implementation will scale. Improvements in the constant factor (in your example, one half) are still useful, but when faced with an order-of-magnitude increase in the problem size, they have little effect.

Upvotes: 1

Pete Kirkham

Reputation: 49331

Big-O does not take into account constant factors. It is only a measure of how the algorithm scales - so anything which grows in proportion to log N is O(log N). It's a relative measure, like saying two people had a 20% pay rise, even though one person's pay went from 10,000 to 12,000 and the other's went from 1,000,000 to 1,200,000. You wouldn't expect to know someone's total pay if you're told they got a 20% rise every year, so don't expect to know the total cost of an algorithm if you know it grows as O(log N).

If the second version of the algorithm uses half the memory but has the same scaling behaviour, then simply say that it uses half the memory.
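To put the analogy in code (a toy sketch; the salary figures are just the ones from the analogy above):

```python
# Hypothetical salaries: the growth rate alone does not reveal the
# absolute amount, only how it changes.
pay_a, pay_b = 10_000, 1_000_000
for year in range(1, 4):
    pay_a *= 1.20  # 20% rise per year
    pay_b *= 1.20
    print(f"year {year}: A = {pay_a:,.0f}, B = {pay_b:,.0f}")
# Same relative growth every year, very different absolute pay --
# just as two O(log N) algorithms can differ by a constant factor.
```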

Upvotes: 2
