I have the following code that applies a Gaussian filter to an arbitrary image 25 times. Every time the filter is applied, the resulting image is normalized:
kernel = np.array([[1.0, 2.0, 1.0],
                   [2.0, 4.0, 2.0],
                   [1.0, 2.0, 1.0]])

for i in range(25):
    # handle each component separately
    img[:, :, 0] = convolve(img[:, :, 0], kernel, mode='same')
    img[:, :, 1] = convolve(img[:, :, 1], kernel, mode='same')
    img[:, :, 2] = convolve(img[:, :, 2], kernel, mode='same')
    img = img / 16  # normalize
What is the best way to reverse this process? That is, if I have a blurred image (the result of executing the code above), how can I recover the original?
Edit 1: Example (image)
Edit 2: Attempt at reproducing Cris's answer

I installed dipimage_2.9. I am using macOS 10.14.2 with Matlab R2016a.

It took me a while to figure out how to specify boundary conditions for convolutions, since DIPimage's convolve.m only accepts image_in and kernel args. I ended up using dip_setboundary for that (DIPimage User Manual section 9.2).

Here's the code (I simply added dip_setboundary accordingly, plus the origin of the crop region for cut):
% Get data
a = readim('https://i.sstatic.net/OfSx2.png'); % using local path in real code
a = a{1}; % Keep only red channel
%% Create kernel
kernel = [1.0, 2.0, 1.0
          2.0, 4.0, 2.0
          1.0, 2.0, 1.0] / 16;
tmp = deltaim((size(kernel)-1)*25+1);
dip_setboundary('add_zeros');
for ii = 1:25
    tmp = convolve(tmp, kernel);
end
kernel = tmp;
%% Apply convolution
dip_setboundary('periodic');
b = convolve(a,kernel);
dip_setboundary('symmetric'); % change back to default
% Find inverse operation
% 1- pad stuff so image and kernel have the same size
% we maintain the periodic boundary condition for image b
b = repmat(b,ceil(imsize(kernel)./imsize(b)));
kernel = extend(kernel,imsize(b));
% 2- apply something similar to Wiener deconvolution
c = real(ift(ft(b)/(ft(kernel)+1e-6))); % Not exactly Wiener, but there's no noise!
% 3- undo padding
c = cut(c,imsize(a), [0, 0]); % upper left corner
Here's the resulting image c:
Let's look at the code in the question for a single channel, assuming img is a gray-scale image -- everything here can be applied per channel, so we don't need to repeat everything three times:
for i in range(25):
    img = ndimage.convolve(img, kernel)
    img = img / 16  # normalize
We'll get to undoing the convolution in a minute. First, let's simplify the operation being applied. The above is identical (within numerical precision) to:
kernel = kernel / 16  # normalize
for i in range(25):
    img = ndimage.convolve(img, kernel)
This is true as long as img is not some integer type where clipping and/or rounding occurs. In general, with * denoting convolution and C some constant,

g = C (f * h) = f * (C h)
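A quick numerical check of this identity (a minimal sketch, assuming NumPy and SciPy are available):

import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)
f = rng.random((32, 32))
h = np.array([[1.0, 2.0, 1.0],
              [2.0, 4.0, 2.0],
              [1.0, 2.0, 1.0]])

# dividing the result by 16 vs. folding the 1/16 into the kernel
a = ndimage.convolve(f, h) / 16
b = ndimage.convolve(f, h / 16)
print(np.allclose(a, b))  # True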
Next, we know that applying the convolution 25 times is the same as applying the convolution once with a composite kernel, because convolution is associative; for example, with four applications,

g = (((f * h) * h) * h) * h = f * (h * h * h * h)
How can we obtain the composite kernel? Applying a convolution to an image that is all zeros and with a 1 in the middle pixel yields the kernel again, so that
delta = np.zeros(kernel.shape)
delta[delta.shape[0]//2, delta.shape[1]//2] = 1
kernel2 = ndimage.convolve(delta, kernel)
np.allclose(kernel2, kernel)  # True, up to numerical precision
Thus, the following code finds the kernel that is used to smooth the image in the question:
kernel = np.array([[1.0, 2.0, 1.0],
                   [2.0, 4.0, 2.0],
                   [1.0, 2.0, 1.0]]) / 16
delta = np.zeros(((kernel.shape[0] - 1) * 25 + 1,
                  (kernel.shape[1] - 1) * 25 + 1))
delta[delta.shape[0] // 2, delta.shape[1] // 2] = 1
for i in range(25):
    delta = ndimage.convolve(delta, kernel)
kernel = delta
This kernel will be very similar to a Gaussian kernel due to the central limit theorem.
Now we can obtain the same output as in the question with a single convolution:
output = ndimage.convolve(img, kernel)
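To verify this equivalence numerically, here's a minimal self-contained sketch (assumption: 'wrap' boundaries on both sides, so the iterated and composite convolutions match exactly; with other boundary modes the image borders differ slightly):

import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)
img = rng.random((64, 64))
small = np.array([[1.0, 2.0, 1.0],
                  [2.0, 4.0, 2.0],
                  [1.0, 2.0, 1.0]]) / 16

# composite kernel, built as above ('constant' matches DIPimage's 'add zeros')
big = np.zeros(((small.shape[0] - 1) * 25 + 1,
                (small.shape[1] - 1) * 25 + 1))
big[big.shape[0] // 2, big.shape[1] // 2] = 1
for i in range(25):
    big = ndimage.convolve(big, small, mode='constant', cval=0.0)

# 25 small convolutions vs. one big one, both with periodic boundaries
a = img.copy()
for i in range(25):
    a = ndimage.convolve(a, small, mode='wrap')
b = ndimage.convolve(img, big, mode='wrap')
print(np.allclose(a, b))  # True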
The process of inverse filtering is called deconvolution. In theory it is a trivial process, but in practice it is very difficult due to noise, inexact knowledge of the kernel, etc.
We know that we can compute the convolution through the Fourier domain:
output = ndimage.convolve(img, kernel, mode='wrap')
is the same as
output = np.real(np.fft.ifft2( np.fft.fft2(img) * np.fft.fft2(np.fft.ifftshift(kernel)) ))
(assuming that kernel is the same size as img; we'd typically have to pad it with zeros first). Any differences between the spatial and frequency domain results are caused by how the image is extended past its boundary when using convolve. The Fourier method assumes a periodic boundary condition, which is why I used the 'wrap' mode for the convolution here.
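As a concrete check of this equivalence, a minimal sketch (it zero-pads the kernel to the image size with its center pixel at the image center, so that ifftshift moves the center to the origin; assumes an odd-sized kernel):

import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)
img = rng.random((64, 64))
kernel = np.array([[1.0, 2.0, 1.0],
                   [2.0, 4.0, 2.0],
                   [1.0, 2.0, 1.0]]) / 16

# pad the kernel with zeros to the image size, centered
pad = np.zeros_like(img)
cy, cx = img.shape[0] // 2, img.shape[1] // 2
ky, kx = kernel.shape[0] // 2, kernel.shape[1] // 2
pad[cy - ky:cy + ky + 1, cx - kx:cx + kx + 1] = kernel

spatial = ndimage.convolve(img, kernel, mode='wrap')
fourier = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(np.fft.ifftshift(pad))))
print(np.allclose(spatial, fourier))  # True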
The inverse operation is simply a division in the Fourier domain:
img = np.real(np.fft.ifft2( np.fft.fft2(output) / np.fft.fft2(np.fft.ifftshift(kernel)) ))
For this to work, we need to know the exact values of kernel, and there should be no noise added in the process. For output computed as above, this gives the exact result in theory.
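For example, a minimal round-trip sketch, using the same padding trick as above (note the odd image size: for this particular 3x3 kernel on an even-sized image, the frequency response is exactly zero at the Nyquist frequency, which is precisely the failure mode discussed next):

import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)
img = rng.random((63, 63))  # odd size: avoids an exact zero at Nyquist for this kernel
kernel = np.array([[1.0, 2.0, 1.0],
                   [2.0, 4.0, 2.0],
                   [1.0, 2.0, 1.0]]) / 16

# pad the kernel to the image size, centered
pad = np.zeros_like(img)
cy, cx = img.shape[0] // 2, img.shape[1] // 2
pad[cy - 1:cy + 2, cx - 1:cx + 2] = kernel
H = np.fft.fft2(np.fft.ifftshift(pad))

output = ndimage.convolve(img, kernel, mode='wrap')
recovered = np.real(np.fft.ifft2(np.fft.fft2(output) / H))
print(np.allclose(recovered, img))  # True (no noise, kernel known exactly)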
However, some kernels could be exactly zero for some frequency components (i.e. np.fft.fft2(np.fft.ifftshift(kernel)) contains zeros). These frequencies cannot be recovered, and dividing by 0 leads to NaN values that spread through the whole image in the inverse transform: the recovered image will be all NaNs.

For a Gaussian kernel there are no exact zeros, so this should not happen. However, there will be many frequencies that are very nearly zero, and the Fourier transform of output will also have very small values at these elements. The inverse process is then the division of one very small value by another, causing numerical precision issues. You can see how, if there is even a tiny bit of noise, this process greatly amplifies it, to the point where the output consists almost entirely of that noise.
The Wiener deconvolution includes regularization to prevent these issues with noise and numerical imprecision. Basically, it prevents the division by very small numbers by adding a positive value to the Fourier transform of kernel. Wikipedia has a good description of Wiener deconvolution.
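Translated to Python, mirroring the MATLAB demo below, a minimal sketch of this regularized division (the small eps plays the role of the 1e-6 constant there; it is not a true Wiener filter, which would use an estimate of the noise-to-signal power ratio):

import numpy as np

def deconvolve(blurred, kernel_padded, eps=1e-6):
    # kernel_padded: zero-padded to blurred.shape, center pixel at the image center
    # divide in the Fourier domain; eps keeps near-zero frequencies from blowing up
    H = np.fft.fft2(np.fft.ifftshift(kernel_padded))
    return np.real(np.fft.ifft2(np.fft.fft2(blurred) / (H + eps)))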
I'm using MATLAB with DIPimage 3 here to do a quick demo (much less effort for me than firing up Python and figuring out how to do all of this there). This is the code:
% Get data
a = readim('https://i.sstatic.net/OfSx2.png');
a = a{1}; % Keep only red channel
% Create kernel
kernel = [1.0, 2.0, 1.0
          2.0, 4.0, 2.0
          1.0, 2.0, 1.0] / 16;
tmp = deltaim((size(kernel)-1)*25+1);
for ii = 1:25
    tmp = convolve(tmp, kernel, 'add zeros');
end
kernel = tmp;
% Apply convolution
b = convolve(a,kernel,'periodic');
% Find inverse operation
% 1- pad stuff so image and kernel have the same size
% we maintain the periodic boundary condition for image b
b = repmat(b,ceil(imsize(kernel)./imsize(b)));
kernel = extend(kernel,imsize(b));
% 2- apply something similar to Wiener deconvolution
c = ift(ft(b)/(ft(kernel)+1e-6),'real'); % Not exactly Wiener, but there's no noise!
% 3- undo padding
c = cut(c,imsize(a),'top left');
This is the output; the top third is the input image, the middle third is the blurred image, the bottom third is the output image:
Important to note here is that I used a periodic boundary condition for the initial convolution, which matches what happens in the Fourier transform. Other boundary conditions will cause artefacts in the inverse transform near the edges; because the kernel size is larger than the image, the whole image would be one big artefact, and you wouldn't be able to recover anything. Also note that, to pad the kernel with zeros to the size of the image, I had to replicate the image, since the kernel is larger than the image. Replicating the image again matches the periodic boundary condition imposed by the Fourier transform. Both of these tricks could be skipped if the input image were much larger than the convolution kernel, as you would expect in a normal situation.
Also note that, without the regularization in the deconvolution, the output is all NaN, because we are dividing very small values by very small values. The Fourier transform of the kernel has a lot of near-zeros in it because the blurring is quite heavy.
Finally, note that adding even a small amount of noise to the blurred image will make it impossible to deconvolve the image in a way that the text can be read. The inverse transform will look very nice, but the text strokes will be distorted enough for the letters to no longer be easily recognizable:
The code above uses DIPimage 3, which doesn't yet have an official binary to install; it needs to be built from source. To run the code using DIPimage 2.x, a few changes are necessary:
The boundary condition must be set using dip_setboundary, instead of being passed directly to the convolve function. The strings 'add zeros' and 'periodic' are the boundary condition names.
The ft and ift functions use a symmetric normalization: each multiplies its output by 1/sqrt(prod(imsize(image))), whereas in DIPimage 3 the normalization is the more common multiplication by 1/prod(imsize(image)) for ift, and 1 for ft. This means that the Fourier transform of kernel must be multiplied by sqrt(prod(imsize(kernel))) to match the result of DIPimage 3:
c = real(ift(ft(b)/((ft(kernel)*sqrt(prod(imsize(kernel))))+1e-6)));
You can't - blurring loses information by averaging.
Consider a 1-dimensional example: convolving [1 2 1] with [1,2,3,4,5,6,7], assuming 0 for the missing "pixels" at the edges, results in [4, 8, 12, 16, 20, 24, 20]. The 8 could come from [1,2,3] but also from [2,1,4] - so you already have 2 different solutions. Whichever you take influences whatever values might have been the source for the 12.
This is an overly simplified example - you could solve this one - but in image processing you might be dealing with 3000x2000 pixels and 2-D convolutions with 3x3, 5x5, 7x7, ... kernels, which makes reversing the process impractical. Even though it may be solvable mathematically, more often than not you get a myriad of solutions and very complex constraints when you apply this to a 2-dimensional convolution over 3000x2000 pixels.
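A quick NumPy check of the numbers in this example:

import numpy as np

signal = np.array([1, 2, 3, 4, 5, 6, 7])
kernel = np.array([1, 2, 1])

# zero-padded convolution, trimmed to the input length
print(np.convolve(signal, kernel, mode='same'))  # [ 4  8 12 16 20 24 20]

# two different neighborhoods yield the same blurred value 8
print(np.convolve([1, 2, 3], kernel, mode='valid'))  # [8]
print(np.convolve([2, 1, 4], kernel, mode='valid'))  # [8]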