Reputation: 7222
I am trying to find a very fast way to find the next higher power of 2 for a very large number (1,000,000 digits). For example, I have 1009 and want to find its next higher power of two, which is 1024 or 2**10.
I tried using a loop, but for large numbers this is very, very slow
y=0
while (1<<y)<1009:
y+=1
print(1<<y)
1024
While this works, it's slow for numbers with more than a million digits. Is there a faster algorithm to find the next higher power of 2 for a number that large?
ANSWERED BY @JonClements
Using 2**number.bit_length() works perfectly, so this will work for large numbers as well. Thanks Jon.
Here's a code example from Jon's implementation:
2**j.bit_length()
1024
Here's a code example using the shift operator:
2<<(j.bit_length()-1)
1024
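For completeness, a minimal runnable version of both one-liners (assuming j = 1009 from the question):

j = 1009
print(2**j.bit_length())           # 1024
print(2 << (j.bit_length() - 1))   # 1024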
Here is the time difference using the million-digit number; the shift-operator version with bit_length is significantly faster:
len(str(aa))
1000000
def useBITLENGTHwithshiftoperator(hm):
    return (1 << (hm.bit_length() - 1)) << 1  # same as 1 << hm.bit_length()
def useBITLENGTHwithpowersoperator(hm):
    return 2**hm.bit_length()
import time

start = time.time()
l=useBITLENGTHwithpowersoperator(aa)
end = time.time()
print(end - start)
0.014303922653198242
start = time.time()
l=useBITLENGTHwithshiftoperator(aa)
end = time.time()
print(end - start)
0.0002968311309814453
Upvotes: 5
Views: 621
Reputation: 51933
I do not code in Python, but millions of digits implies bignums, so:
Try to look inside your bignum lib. It might return the number of words or bits used in O(1), as some number representations need it to speed up other stuff. In such a case you can obtain your answer in O(1) for free.
As @JonClements suggested in a comment, try bit_length() and measure whether it is O(1) or O(log(n))...
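A minimal sketch of such a measurement (in CPython, int.bit_length() reads the stored size of the number, so it should be effectively O(1)):

import timeit

for digits in (10**4, 10**5, 10**6):
    n = 10**digits
    t = timeit.timeit(n.bit_length, number=1000000)
    print(digits, "digits:", t, "s per million calls")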
Your while loop is O(n^3) instead of O(n^2). You are bit-shifting from 1 over and over again in each iteration. Why not just shift the last result by 1 bit instead? Something like:
for (y=0,yy=1;yy<1009;y++,yy<<=1);
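The same incremental idea in Python (a sketch; the function name is mine):

def next_pow2_incremental(x):
    y, yy = 0, 1
    while yy < x:
        y += 1
        yy <<= 1   # shift the previous result by one bit instead of starting from 1
    return yy      # smallest power of two >= x

print(next_pow2_incremental(1009))   # 1024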
Using log2 might be faster. In case the bignum class you use has it implemented correctly, then after some number-size threshold log2(1009) might be significantly faster. But that depends on the type of numbers you are using and the bignum implementation itself.
Bit-shifting can be even faster. If you have some upper limit on your numbers, you can use binary search, converting your bit-shifting into O(n.log2(n)).
If not, you can start by bit-shifting 32 bits at a time instead of 1, and once you reach the target size, bit-shift by 1 bit. Or even use more layers, like 1024/128/16/1 bits. The complexity would still be O(n^2), but the constant would be ~1024 times smaller, speeding up your code ~1024 times for big numbers...
Another option is to find the limit by shifting by 1 bit, then by 2, then by 4, 8, 16, 32, 64, ... until the result is bigger than your target number, and from there either bit-shift back or use binary search. This one would be O(n.log2(n)) even without any upper limit, as sketched below.
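A minimal sketch of that doubling-then-binary-search idea (the function name is mine; with Python ints, bit_length() makes this unnecessary, but it illustrates the approach for bignum types without a bit-count accessor):

def next_pow2_gallop(x):
    lo, hi = 0, 1
    while (1 << hi) < x:          # gallop: 1, 2, 4, 8, ... bits until we pass x
        lo, hi = hi, hi * 2
    while lo < hi:                # binary search smallest y with (1 << y) >= x
        mid = (lo + hi) // 2
        if (1 << mid) < x:
            lo = mid + 1
        else:
            hi = mid
    return 1 << lo

print(next_pow2_gallop(1009))     # 1024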
However, all of these approaches bring much higher overhead and will slow down the processing of smaller numbers.
Constructing 2^(y-1) < x <= 2^y might be possible to enhance too. For example, by using the bit-shifting approach to find y, you get your answer as a byproduct for free. With floating-point or fixed-point numbers you can directly construct such a number by computing the exponent for 1 or by setting the correct bit in zero... But for arbitrary numbers (where the size of the number is dynamic) this is much harder/slower. So it all boils down to what kind of bignum class you have and what values you use.
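For illustration, with Python ints "setting the correct bit" is just a shift, and for floats math.ldexp builds 2.0**y directly from the exponent:

import math

y = 10
print(1 << y)              # 1024, integer with bit y set
print(math.ldexp(1.0, y))  # 1024.0, float constructed from the exponent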
Upvotes: 1
Reputation: 10013
Take 2^ceiling(logBase2(x)). This should work unless x is a power of 2, and you can check for that with: if logBase2(x)==ceiling(logBase2(x)).
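A hedged Python rendering of this answer (math.log2 accepts arbitrarily large ints but returns a float, so the integer comparison guards against rounding near exact powers of two):

import math

def next_higher_pow2(x):
    e = math.ceil(math.log2(x))
    if (1 << e) <= x:   # x was an exact power of two (or rounding fell short)
        e += 1
    return 1 << e

print(next_higher_pow2(1009))   # 1024
print(next_higher_pow2(1024))   # 2048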
Upvotes: 1