Reputation: 1627
I tried implementing image filtering in the frequency domain (spectrum) based on this OpenCV example from the docs, copied here for convenience:
void convolveDFT(const Mat& A, const Mat& B, Mat& C)
{
C.create(abs(A.rows - B.rows)+1, abs(A.cols - B.cols)+1, A.type());
Size dftSize;
// calculate the size of DFT transform
dftSize.width = getOptimalDFTSize(A.cols + B.cols - 1);
dftSize.height = getOptimalDFTSize(A.rows + B.rows - 1);
// allocate temporary buffers and initialize them with 0's
Mat tempA(dftSize, A.type(), Scalar::all(0));
Mat tempB(dftSize, B.type(), Scalar::all(0));
// copy A and B to the top-left corners of tempA and tempB, respectively
Mat roiA(tempA, Rect(0,0,A.cols,A.rows));
A.copyTo(roiA);
Mat roiB(tempB, Rect(0,0,B.cols,B.rows));
B.copyTo(roiB);
// now transform the padded A & B in-place;
// use "nonzeroRows" hint for faster processing
dft(tempA, tempA, 0, A.rows);
dft(tempB, tempB, 0, B.rows);
// multiply the spectrums;
// the function handles packed spectrum representations well
mulSpectrums(tempA, tempB, tempA, 0);
// transform the product back from the frequency domain.
// Even though all the result rows will be non-zero,
// you need only the first C.rows of them, and thus you
// pass nonzeroRows == C.rows
dft(tempA, tempA, DFT_INVERSE + DFT_SCALE, C.rows);
// now copy the result back to C.
tempA(Rect(0, 0, C.cols, C.rows)).copyTo(C);
}
I used the Lena image as A (512x512) and an identity filter (all entries set to 0 except the center one, which is 1) as B (41x41).
It seems that the bottom and the right part of the image have been cropped. Also, although not visible here due to SO formatting, the filtered image is smaller than the original (because of the function's first line).
How could I modify the code so that it filters the image just like the filter2D function would, so that in this case the result would be the original image?
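For reference, here is roughly how I set up the test around the convolveDFT function above (the file name and variable names are just what I use locally):
#include <opencv2/opencv.hpp>
using namespace cv;

int main()
{
    // load the test image as single-channel float, since dft() works on float data
    Mat A = imread("lena.png", IMREAD_GRAYSCALE);
    A.convertTo(A, CV_32F);
    // identity kernel: all zeros except the center entry
    Mat B = Mat::zeros(41, 41, CV_32F);
    B.at<float>(20, 20) = 1.0f;
    // reference result I want to reproduce
    Mat ref;
    filter2D(A, ref, -1, B);
    // DFT-based result: comes out smaller than ref and shifted
    Mat C;
    convolveDFT(A, B, C);
    return 0;
}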
Upvotes: 3
Views: 2006
Reputation: 14579
The size of the convolution result should be A.cols + B.cols - 1 by A.rows + B.rows - 1 (just like the arguments of the getOptimalDFTSize calls). So, to get the full convolution result, the first line of the example should be changed to:
C.create(A.rows + B.rows - 1, A.cols + B.cols - 1, A.type());
This should then give you a resulting image that is slightly larger than the original one (552x552 for the 512x512 image and 41x41 kernel in the question), including a border all around which corresponds to the tails of the convolution (where the filtering ramps up and down).
filter2D, on the other hand, does not return the full convolution: its output is limited to an image of the same size as the original, so the convolution tails are removed. Since filter2D anchors the kernel at its center by default, this amounts to a shift of (B.cols - 1)/2 along the columns and (B.rows - 1)/2 along the rows relative to the full result. The filtered image can thus be extracted with the following:
C.create(A.rows, A.cols, A.type());
...
tempA(Rect((B.cols-1)/2, (B.rows-1)/2, A.cols, A.rows)).copyTo(C);
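Putting it together, a minimal sketch of the whole modified function could look as follows (filter2DviaDFT is just an illustrative name, and the input is assumed to be single-channel float as in the original example; note that the nonzeroRows hint of the inverse DFT is enlarged so that the rows reached by the shifted crop are actually computed):
void filter2DviaDFT(const Mat& A, const Mat& B, Mat& C)
{
    // output has the same size as the input image, like filter2D
    C.create(A.rows, A.cols, A.type());
    // calculate the size of the DFT transform
    Size dftSize;
    dftSize.width = getOptimalDFTSize(A.cols + B.cols - 1);
    dftSize.height = getOptimalDFTSize(A.rows + B.rows - 1);
    // allocate temporary buffers and initialize them with 0's
    Mat tempA(dftSize, A.type(), Scalar::all(0));
    Mat tempB(dftSize, B.type(), Scalar::all(0));
    // copy A and B to the top-left corners of tempA and tempB
    Mat roiA = tempA(Rect(0, 0, A.cols, A.rows));
    A.copyTo(roiA);
    Mat roiB = tempB(Rect(0, 0, B.cols, B.rows));
    B.copyTo(roiB);
    // transform the padded A & B in-place
    dft(tempA, tempA, 0, A.rows);
    dft(tempB, tempB, 0, B.rows);
    // multiply the spectrums (flags = 0)
    mulSpectrums(tempA, tempB, tempA, 0);
    // the crop below reaches down to row (B.rows - 1)/2 + A.rows - 1,
    // so ask for that many non-zero rows instead of just C.rows
    dft(tempA, tempA, DFT_INVERSE + DFT_SCALE, (B.rows - 1)/2 + A.rows);
    // keep only the central part of the full convolution, like filter2D
    tempA(Rect((B.cols - 1)/2, (B.rows - 1)/2, A.cols, A.rows)).copyTo(C);
}
Keep in mind that filter2D actually computes correlation rather than convolution, so for a non-symmetric kernel you would also have to flip B beforehand (e.g. flip(B, B, -1)); with the symmetric identity kernel from the question the two are identical.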
Upvotes: 1