Reputation: 254
I have noticed that numpy.convolve produces strange results on sufficiently complex signals. Here is a simple test example:
import numpy as np
import matplotlib.pyplot as plt
from scipy import signal
def conv_np(x, win):
    return np.convolve(x, win, 'valid')
def conv_dot(x, win):
    Z = np.asarray([x[cnt:cnt + win.shape[0]] for cnt in range(x.shape[0] - win.shape[0] + 1)])
    return np.dot(Z, win)
# test 1
x = np.repeat([0., 1., 0.], 300)
win = signal.windows.hamming(50)
plt.subplot(2,1,1)
plt.plot( conv_np(x, win) - conv_dot(x, win) )
# test 2
x = np.random.random(size=(10000,))
win = x[4000:5000]
plt.subplot(2,1,2)
plt.plot( conv_np(x, win) - conv_dot(x, win) )
plt.show()
The plots show the difference between numpy.convolve and a direct implementation of convolution using a dot product. The top plot is for a simple signal and window (a rectangular pulse and a Hamming window). The bottom plot is for a random signal with the window being just a portion of that signal.
So there is almost no difference between the dot-product and numpy implementations of the convolution for a simple signal/window, yet there is an enormous difference for a complex signal/window. Since the dot-product implementation can be considered ground truth, I interpret this difference as an error in numpy. Please let me know if I am wrong, or if there is a way to make numpy produce the same results as the dot product.
Upvotes: 3
Views: 1485
Reputation: 67427
To properly implement convolution with a dot product you need to reflect the kernel. If you redefine conv_dot as:
def conv_dot(x, win):
    Z = [x[cnt:cnt + win.size] for cnt in range(x.size - win.size + 1)]
    return np.dot(Z, win[::-1])  # the [::-1] is the only real change
you will find that the errors are now negligible, as expected: at most 2e-13 in a trial run I did with your second example. This also explains why your first test looked fine without the flip: the Hamming window is symmetric, so reversing it changes nothing. A sliding dot product with an unreversed kernel computes correlation rather than convolution, and the two only coincide for symmetric kernels.
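For completeness, here is a minimal self-contained check along the lines of your second test (the seed is arbitrary and only there for reproducibility; the exact error magnitude will vary slightly from run to run):

import numpy as np

def conv_np(x, win):
    return np.convolve(x, win, 'valid')

def conv_dot(x, win):
    # sliding-window dot product with the kernel reversed: true convolution
    Z = [x[cnt:cnt + win.size] for cnt in range(x.size - win.size + 1)]
    return np.dot(Z, win[::-1])

rng = np.random.default_rng(0)  # arbitrary seed, for reproducibility only
x = rng.random(10000)
win = x[4000:5000]

# the flipped dot product agrees with np.convolve to rounding error
print(np.max(np.abs(conv_np(x, win) - conv_dot(x, win))))

# the unflipped dot product is cross-correlation, which numpy exposes directly
Z = [x[cnt:cnt + win.size] for cnt in range(x.size - win.size + 1)]
print(np.allclose(np.dot(Z, win), np.correlate(x, win, 'valid')))  # True

In other words, your original conv_dot was a perfectly good implementation of np.correlate, just not of np.convolve.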
Upvotes: 2