Reputation: 689
What is the time complexity of binary-to-decimal conversion?
I think that if there are K bits in a binary number, the time complexity (TC) would be O(K); and since we usually say a binary number has N bits, that makes the TC O(N).
Here is how I got this: I asked how many decimal digits (K) are needed to represent an N-bit binary number.
Equating the largest K-digit decimal number with the largest N-bit binary number gives 10^K - 1 = 2^N - 1, so K = N * log10(2), which gives TC = O(N).
Can someone clarify this?
Also, is there any chance to reduce this time complexity?
Upvotes: 1
Views: 235
Reputation: 676
A decimal digit can represent log(10)/log(2) binary digits, approximately 3.32. But you want binary to decimal, so assume you need 4 bits per decimal digit. In any event, your output size is O(N), where N is your input size.
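As a quick numeric check (a small Python sketch; the sample sizes are arbitrary), the exact decimal digit count of the largest N-bit value tracks N / 3.32, i.e. the output size grows linearly with N:

```python
import math

BITS_PER_DECIMAL_DIGIT = math.log2(10)   # log(10)/log(2) ≈ 3.32

# Exact decimal digit count of the largest N-bit value (2^N - 1)
# versus the N / 3.32 estimate: both grow linearly, so output size is O(N).
for n in (8, 64, 256, 1024):
    actual = len(str(2**n - 1))
    estimate = n / BITS_PER_DECIMAL_DIGIT
    print(f"N={n:>5}  decimal digits={actual:>4}  estimate={estimate:8.1f}")
```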
Now, that tells you very little about the time complexity of the unstated algorithm.
For small N, we can just do a table lookup in O(1), but what about when N is large, e.g. N = 10^6?
A naive algorithm processes one bit of input at a time, doing a multiply-by-2-and-add on the decimal output. That is O(N) steps over the input, with each step costing O(N), because that is the cost of the multiply/add on an output that is itself O(N) digits long. So O(N^2) is your time complexity.
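As an illustration of that bit-at-a-time scheme, here is one possible Python sketch (the digit-list representation and the function name are illustrative, not taken from the answer); every input bit triggers a sweep over the current decimal digits, which is where the quadratic cost comes from:

```python
def binary_to_decimal_digits(bits: str) -> str:
    """Convert a binary string (e.g. "1011") to its decimal representation.

    The decimal value is kept as a list of digits (least significant first)
    and updated with a double-and-add pass for every input bit, so each of
    the N input bits costs an O(N)-digit sweep: O(N^2) overall.
    """
    digits = [0]                      # decimal value, least significant digit first
    for bit in bits:                  # one pass per input bit: O(N) iterations
        carry = int(bit)
        for i in range(len(digits)):  # double the current value and add the bit: O(N)
            total = digits[i] * 2 + carry
            digits[i] = total % 10
            carry = total // 10
        while carry:                  # the value grew by a digit
            digits.append(carry % 10)
            carry //= 10
    return "".join(str(d) for d in reversed(digits))

print(binary_to_decimal_digits("1011"))    # 11
print(binary_to_decimal_digits("1" * 16))  # 65535
```

Swapping the digit list for a big-integer type does not change the asymptotics here: the multiply/add still touches an O(N)-digit value once per input bit, so the total stays O(N^2).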
Upvotes: 0
Reputation: 77073
The starting point is the equation from your question:
10^K - 1 = 2^N - 1
Since -1 appears on both the left and the right side, we add 1 to both sides and get:
10^K = 2^N
To get K out of 10^K, we take the base-10 logarithm of both sides. Since a logarithm is the power to which you must raise the base to obtain the argument, log10(10^K) yields K. On the other side, log10(2^N) = N * log10(2), which is the "N Log2 base10" value you arrived at.
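Written out as one chain (same K and N as in the question), the steps are:

```latex
10^{K} - 1 = 2^{N} - 1
\;\Longrightarrow\; 10^{K} = 2^{N}
\;\Longrightarrow\; \log_{10}\!\left(10^{K}\right) = \log_{10}\!\left(2^{N}\right)
\;\Longrightarrow\; K = N \log_{10} 2 \approx 0.301\, N = \Theta(N)
```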
Upvotes: 1