Reputation: 381
I want to multiply two polynomials fast in Python. My polynomials are rather large (> 100000 elements) and I have to multiply lots of them. Below, you will find my approach:
from numpy.random import seed, randint
from numpy import polymul, pad
from numpy.fft import fft, ifft
from timeit import default_timer as timer

length=100

def test_mul(arr_a,arr_b): #inbuilt python multiplication
    c=polymul(arr_a,arr_b)
    return c

def sb_mul(arr_a,arr_b): #my schoolbook multiplication
    c=[0]*(len(arr_a) + len(arr_b) - 1 )
    for i in range( len(arr_a) ):
        for j in range( len(arr_b) ):
            k=i+j
            c[k]=c[k]+arr_a[i]*arr_b[j]
    return c

def fft_test(arr_a,arr_b): #fft based polynomial multiplication
    arr_a1=pad(arr_a,(0,length),'constant')
    arr_b1=pad(arr_b,(0,length),'constant')
    a_f=fft(arr_a1)
    b_f=fft(arr_b1)
    c_f=[0]*(2*length)
    for i in range( len(a_f) ):
        c_f[i]=a_f[i]*b_f[i]
    return c_f

if __name__ == '__main__':
    seed(int(timer()))
    random=1
    if(random==1):
        x=randint(1,1000,length)
        y=randint(1,1000,length)
    else:
        x=[1]*length
        y=[1]*length

    start=timer()
    res=test_mul(x,y)
    end=timer()
    print("time for built in pol_mul", end-start)

    start=timer()
    res1=sb_mul(x,y)
    end=timer()
    print("time for schoolbook mult", end-start)

    res2=fft_test(x,y)
    print(res2)

    #########check############
    if( len(res)!=len(res1) ):
        print("ERROR")
    for i in range( len(res) ):
        if( res[i]!=res1[i] ):
            print("ERROR at pos ",i,"res[i]:",res[i],"res1[i]:",res1[i])
Now, here is my approach in detail:

1. First, I tried a naive schoolbook implementation myself, with complexity O(n^2). As you may expect, it turned out to be very slow.
2. Second, I came to know about polymul in the NumPy library. This function is a lot faster than the previous one, but I realized it also has O(n^2) complexity: if you increase the length by a factor k, the time increases by a factor of roughly k^2 (a quick way to check this is sketched after this list).
3. My third approach is an FFT-based multiplication using the inbuilt FFT functions. I followed the well-known approach also described here, but I am not able to get it to work.
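A minimal timing sketch (the sizes are arbitrary, chosen only for illustration) of the scaling check mentioned in point 2: if polymul is O(n^2), doubling the input length should roughly quadruple the runtime.

from numpy import polymul
from numpy.random import randint
from timeit import default_timer as timer

# Time polymul at doubling sizes; with O(n^2) scaling, each step
# should take roughly four times as long as the previous one.
for n in (2000, 4000, 8000):
    a = randint(1, 1000, n)
    b = randint(1, 1000, n)
    start = timer()
    polymul(a, b)
    print(n, timer() - start)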
Now my questions are:

- Where am I going wrong in my FFT-based approach? Can you please tell me how I can fix it?
- Is my observation that the polymul function has O(n^2) complexity correct?

Please let me know if you have any questions. Thanks in advance.
Upvotes: 4
Views: 5577
Reputation: 14577
- Where am I going wrong in my FFT-based approach? Can you please tell me how I can fix it?

The main problem is that in the FFT-based approach you should be taking the inverse transform after the multiplication, but that step is missing from your code. With that step added, your code should look like the following:
def fft_test(arr_a,arr_b): #fft based polynomial multiplication
    arr_a1=pad(arr_a,(0,length),'constant')
    arr_b1=pad(arr_b,(0,length),'constant')
    a_f=fft(arr_a1)
    b_f=fft(arr_b1)
    c_f=[0]*(2*length)
    for i in range( len(a_f) ):
        c_f[i]=a_f[i]*b_f[i]
    return ifft(c_f)
Note that there may also be a few opportunities for improvement:

- You can avoid the explicit padding by passing the desired transform length as the second argument to fft (e.g. a_f = fft(arr_a, length)).
- You can compute the element-wise product directly on the NumPy arrays (with numpy.multiply, or simply a_f * b_f) instead of looping in Python.
- Since your inputs are real-valued, you can use numpy.fft.rfft and numpy.fft.irfft (instead of numpy.fft.fft and numpy.fft.ifft) for some extra performance boost.

So an implementation for real-valued inputs may look like:
from numpy.fft import rfft, irfft

def fftrealpolymul(arr_a, arr_b): #fft based real-valued polynomial multiplication
    L = len(arr_a) + len(arr_b)
    a_f = rfft(arr_a, L)
    b_f = rfft(arr_b, L)
    return irfft(a_f * b_f)
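Since rfft/irfft return floating-point values, a quick sanity check against polymul (a small sketch, assuming integer coefficients as in the question) would round the result back to integers before comparing:

from numpy import polymul, rint
from numpy.random import randint

# Sanity check: compare the FFT-based product with polymul.
x = randint(1, 1000, 100)
y = randint(1, 1000, 100)

ref = polymul(x, y)
approx = fftrealpolymul(x, y)

# Round to recover the integer coefficients (and drop the trailing ~0 term).
rounded = rint(approx[:len(ref)]).astype(int)
print((rounded == ref).all())  # expected: True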
- Is my observation that the polymul function has O(n^2) complexity correct?
That also seems to be the performance I am observing, and it matches the available code in my numpy installation (version 1.15.4; there doesn't seem to be any change in that part in the more recent 1.16.1 version).
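If it helps to convince yourself, polymul's result matches a direct (non-FFT) convolution of the coefficient arrays, which is consistent with the quadratic behavior; a small illustrative sketch:

import numpy as np

# polymul on coefficient arrays gives the same result as a direct convolution.
a = np.random.randint(1, 1000, 50)
b = np.random.randint(1, 1000, 50)
print(np.array_equal(np.polymul(a, b), np.convolve(a, b)))  # expected: True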
Upvotes: 3