rando

Reputation: 377

Why does np.convolve shift the resulting signal by 1?

I have the following two signals:

X0 = array([0., 0., 0., 0., 0., 1., 0., 0., 0., 0.])
rbf_kernel = array([2.40369476e-04, 4.82794999e-03, 4.97870684e-02, 2.63597138e-01,
       7.16531311e-01, 1.00000000e+00, 7.16531311e-01, 2.63597138e-01,
       4.97870684e-02, 4.82794999e-03])

I tried to convolve the two signals using np.convolve(X0, rbf_kernel, mode='same'), but the resulting convolution is shifted one sample to the right, as shown below. The green, orange, and blue curves are X0, rbf_kernel, and the result of the last command, respectively. I expected the convolution to peak where the two signals align (i.e., at point 5), but that did not happen.

[Plot of X0 (green), rbf_kernel (orange), and the convolution result (blue); the blue peak sits at index 6 instead of 5.]

Upvotes: 4

Views: 2031

Answers (1)

Girish Hegde

Reputation: 1515

The result is shifted because of the padding used for "same" convolution. Convolution slides the flipped kernel over the input and takes a dot product at each step. For a valid convolution the kernel must fully overlap the input at every stride, so the output size is n - m + 1 (n = len(input), m = len(kernel), assuming m <= n). For a same convolution the output size is max(m, n); to achieve that, a total of m - 1 zeros of padding is applied to the input, and then a valid convolution is performed.
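Those size formulas can be checked directly against np.convolve's three modes (a small sketch with arbitrary array lengths, not from the original answer):

```python
import numpy as np

x = np.ones(10)   # n = 10
k = np.ones(4)    # m = 4

full  = np.convolve(x, k, mode='full')   # n + m - 1 samples
same  = np.convolve(x, k, mode='same')   # max(n, m) samples
valid = np.convolve(x, k, mode='valid')  # n - m + 1 samples

print(len(full), len(same), len(valid))  # 13 10 7
```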

In your example n = m = 10, so the same-convolution output size is max(10, 10) = 10. This requires m - 1 = 9 zeros of padding: 5 on the left and 4 on the right. The padded input (X0) looks like:

padded_x = [0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 1. 0. 0. 0. 0. 0. 0. 0. 0.] with length 19.

flipped kernel = [4.82794999e-03 4.97870684e-02 2.63597138e-01 7.16531311e-01 1.00000000e+00 7.16531311e-01 2.63597138e-01 4.97870684e-02 4.82794999e-03 2.40369476e-04]

So the convolution output is maximum at the 6th step (counting from 0), not the 5th.
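You can confirm the peak positions with np.argmax on the arrays from the question:

```python
import numpy as np

X0 = np.array([0., 0., 0., 0., 0., 1., 0., 0., 0., 0.])
rbf_kernel = np.array([2.40369476e-04, 4.82794999e-03, 4.97870684e-02, 2.63597138e-01,
                       7.16531311e-01, 1.00000000e+00, 7.16531311e-01, 2.63597138e-01,
                       4.97870684e-02, 4.82794999e-03])

out = np.convolve(X0, rbf_kernel, mode='same')
print(np.argmax(X0), np.argmax(out))   # 5 6 - the peak moved one step to the right
```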

Here's a sample SAME convolution code:

import numpy as np 
import matplotlib.pyplot as plt

def same_conv(x, k):
    if len(k) > len(x):
        # treat the longer array as the input and the shorter as the kernel
        x, k = k, x

    n = x.shape[0]
    m = k.shape[0]

    # total padding of m - 1 zeros, split left-heavy
    # (this matches np.convolve's 'same' mode)
    padding   = m - 1
    left_pad  = int(np.ceil(padding / 2))
    right_pad = padding - left_pad

    x = np.pad(x, (left_pad, right_pad), 'constant')

    out = []

    # flip the kernel (convolution, as opposed to correlation)
    k = k[::-1]

    # slide the flipped kernel over the padded input,
    # taking a dot product at each step
    for i in range(n):
        out.append(np.dot(x[i: i+m], k))

    return np.array(out)

X0 = np.array([0., 0., 0., 0., 0., 1., 0., 0., 0., 0.])
rbf_kernel = np.array([2.40369476e-04, 4.82794999e-03, 4.97870684e-02, 2.63597138e-01,
                       7.16531311e-01, 1.00000000e+00, 7.16531311e-01, 2.63597138e-01,
                       4.97870684e-02, 4.82794999e-03])
convolved = same_conv(X0, rbf_kernel)

plt.plot(X0)
plt.plot(rbf_kernel)
plt.plot(convolved)
plt.show()

which results in the same shifted output as yours.
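If you need the peak to stay aligned, one common workaround (not part of the explanation above, just an assumption about what you want) is to use an odd-length symmetric kernel: the m - 1 padding zeros then split evenly, so the kernel's center sample lands on the input's peak. For example, dropping the first sample of rbf_kernel leaves a symmetric 9-sample kernel:

```python
import numpy as np

X0 = np.array([0., 0., 0., 0., 0., 1., 0., 0., 0., 0.])
rbf_kernel = np.array([2.40369476e-04, 4.82794999e-03, 4.97870684e-02, 2.63597138e-01,
                       7.16531311e-01, 1.00000000e+00, 7.16531311e-01, 2.63597138e-01,
                       4.97870684e-02, 4.82794999e-03])

odd_kernel = rbf_kernel[1:]           # 9 samples, symmetric around the peak
out = np.convolve(X0, odd_kernel, mode='same')
print(np.argmax(out))                 # 5 - aligned with X0's peak
```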

Upvotes: 3
