I have 2 images, im1 and im2, shown below. The im2 picture is the same as im1, but the only difference between them is the colors. im1 has RGB ranges of (0-255, 0-255, 0-255) for each color channel while im2 has RGB ranges of (201-255, 126-255, 140-255). My exercise is to reverse the added effects so I can restore im2 to im1 as closely as I can. I have 2 thoughts in mind. The first is to match their histograms so they both have the same colors. I tried it using histeq but it restores only a portion of the image. Is there any way to change im2's histogram to be exactly the same as im1's? The second approach was just to copy each pixel value from im1 to im2, but this is wrong since it doesn't restore the original image state. Are there any suggestions to restore the image?
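Something along these lines is what I had in mind for the histogram idea (just a per-channel sketch, assuming the Image Processing Toolbox is available):
matched = im2;
for c = 1 : 3
    %// match each channel of im2 against the histogram of the same channel of im1
    matched(:,:,c) = histeq(im2(:,:,c), imhist(im1(:,:,c)));
end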
Upvotes: 2
Views: 2661
@sepdek below pretty much suggested the method that @NKN alluded to, but I will provide another approach: a colour correction based on a least mean squared solution. The assumption is that transforming a pixel from im2 to im1 requires a linear combination of weights. In other words, given an RGB pixel from the corrupted image (im2), with its red, green and blue components shaped into a 3 x 1 vector, there exists some linear transformation that produces its equivalent pixel in the clean image (im1). Concretely, we have this relationship:
[R_im1]         [R_im2]
[G_im1] = A  *  [G_im2]
[B_im1]         [B_im2]

   Y    = A  *     X
A in this case would be a 3 x 3 matrix, so correcting a pixel is simply a matrix multiplication: the input RGB pixel from im2 is X and the output RGB pixel from im1 is Y. We can extend this to as many pixels as we want, where each pair of corresponding pixels from im1 and im2 establishes one column of Y and X. In general, this extends X and Y to 3 x N matrices. To find the matrix A, you would compute the least mean squared error solution. I won't go through the derivation, but finding the optimal A requires the pseudo-inverse, and in our case A works out to:

A = Y * X^T * (X * X^T)^(-1)

which is exactly the A = Y*(X.')/(X*X.') line in the code below.
Once you find this matrix A, you would take each pixel in your image, shape it into a 3 x 1 vector, then multiply A with this vector as in the relationship above. One thing you're probably asking yourself is what kinds of pixels you need to grab from both images to make this work. The one guideline you must adhere to is that you sample from the same spatial location in both images. As such, if we were to grab a pixel at... say... row 4, column 9, you need to make sure that the pixels from im1 and im2 both come from that same row and column, and that they are placed in the same corresponding columns of X and Y.
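For instance, using the row 4, column 9 example (purely illustrative), one matched pair contributes one column to each matrix:
r = 4; c = 9;
X(:,1) = double(squeeze(im2(r,c,:)));   %// corrupted pixel as a 3 x 1 column
Y(:,1) = double(squeeze(im1(r,c,:)));   %// clean pixel from the same location, in the same column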
Another small caveat with this approach is that you need to sample a lot of pixels to get a good solution, and you also need the sampling to be spread over the entire image. If you localize the sampling to a small area, you won't get a good enough distribution of the colours, and the output will not look very nice. It's up to you how many pixels you choose, but from experience you reach a point where the output plateaus and adding more pixels makes no visible difference. For demonstration purposes, I chose 2000 pixels at random positions throughout the image.
As such, this is what the code would look like. I use randperm to draw num_colours unique random indices between 1 and M, where M is the number of pixels in a single channel of the image. These are linear indices, so we can sample from the images and construct our matrices directly. We then apply the above equation to find A, and finally take each pixel and multiply it by A to get the output. Without further ado:
close all;
clear all;
im1 = imread('https://i.sstatic.net/GtgHU.jpg');
im2 = imread('https://i.sstatic.net/wHW50.jpg');
rng(123); %// Set seed for reproducibility
num_colours = 2000;
ind = randperm(numel(im1) / size(im1,3), num_colours);
%// Grab colours from original image
red_out = im1(:,:,1);
green_out = im1(:,:,2);
blue_out = im1(:,:,3);
%// Grab colours from corrupted image
red_in = im2(:,:,1);
green_in = im2(:,:,2);
blue_in = im2(:,:,3);
%// Create 3 x N matrices
X = double([red_in(ind); green_in(ind); blue_in(ind)]);
Y = double([red_out(ind); green_out(ind); blue_out(ind)]);
%// Find A
A = Y*(X.')/(X*X.');
%// Cast im2 to double for precision
im2_double = double(im2);
%// Apply matrix multiplication
out = cast(reshape((A*reshape(permute(im2_double, [3 1 2]), 3, [])).', ...
    [size(im2_double,1) size(im2_double,2), 3]), class(im2));
Let's go through this code slowly. I read your images directly from Stack Overflow, then use rng to set the seed so that you can reproduce the same random pixel selection (and therefore the same results) on your end. We generate the random linear indices, then build the 3 x N matrices for both im1 and im2. Finding A is exactly as described above, but you may not be used to the mrdivide (/) operator: it takes the inverse of whatever is on its right side and multiplies it with whatever is on its left, except that it solves the system directly rather than computing that inverse separately and multiplying afterwards, which is more efficient and more accurate. In fact, MATLAB will warn you to avoid calculating the inverse explicitly and to use the divide operators instead. Next, I cast im2 to double to ensure precision, as A will most likely be floating point valued, then multiply each pixel by A to compute the result. That last line of code looks pretty intimidating, but if you want to figure out how I derived it, I used the same kind of matrix multiplication to create vintage-style photos, and you can read up about it here: How do I create vintage images in MATLAB?. out stores our final image.
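If that reshape/permute one-liner is hard to read, here is a rough per-pixel equivalent, much slower and shown only for illustration:
%// equivalent but slower sketch of the vectorized line above
out_loop = zeros(size(im2_double));
for row = 1 : size(im2_double,1)
    for col = 1 : size(im2_double,2)
        px = squeeze(im2_double(row,col,:));           %// 3 x 1 input pixel
        out_loop(row,col,:) = reshape(A*px, 1, 1, 3);  %// corrected 3 x 1 pixel
    end
end
out_loop = cast(out_loop, class(im2));                 %// back to the original type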
After running this code and displaying out, this is what we get:
Now, the output looks completely scrambled, but the colour distribution more or less mimics what the original input image looks like. I have a few explanations on why this is the case:

1. The colour mapping between im2 and im1 is most likely not one-to-one. If a single colour in im2 needs to map to more than one colour in im1, it is impossible for a linear multiplication with the matrix A to generate more than one output colour for im1 from that single pixel value in im2. Instead, the least mean-squared solution will generate the colour that minimizes the error, i.e. the best compromise it can (a toy numeric illustration follows this list). This is probably why the face and other fine details of the image are obscured.

2. im2 is not completely clean; I can also see various spots of salt-and-pepper noise across all of the channels. One bad thing about this method is that if your image is subject to noise, it will not faithfully reconstruct the original image. It can only undo a wrong mapping of colours; should any other type of image noise be introduced, the method will definitely not work, because you would be trying to reconstruct the original image from a noisy one. There are pixels in the noisy image that were never present in the original image, so you'll have no luck getting it back to the way it was before!
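Here is a toy numeric illustration of the compromise in point 1 (made-up colours; pinv is used instead of the / operator only because this tiny X is rank deficient):
x  = [100; 150; 200];          %// one corrupted colour seen at two locations
y1 = [50; 60; 70];             %// it should map to this clean colour in one spot...
y2 = [200; 10; 30];            %// ...and to this clean colour in another
A  = [y1 y2] * pinv([x x]);    %// least mean squared fit
A*x                            %// roughly [125; 35; 50], the average of y1 and y2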
If you want to take a look at the histograms of each channel for the original image and the output image, the code I used to generate that comparison figure was:
names = {'Red', 'Green', 'Blue'};
figure;
for idx = 1 : 3
    subplot(3,2,2*idx - 1);
    imhist(im1(:,:,idx));
    title([names{idx} ': Image 1']);
end
for idx = 1 : 3
    subplot(3,2,2*idx);
    imhist(out(:,:,idx));
    title([names{idx} ': Output']);
end
The left side shows the red, green and blue histograms for the original image, while the right side shows the same histograms for the reconstructed image. You can see that the general shape more or less mimics the original image, but there are some spikes throughout, most likely attributable to quantization noise and the non-unique mapping between the colours of the two images.
All in all, this is the best I could do, but I think that was the whole point of the exercise... to show that a perfect restoration isn't possible.
For more information on how to perform colour correction, check out Richard Alan Peters II's Digital Image Processing slides on colour correction. That is what I started with, and the derivation of how to calculate A can be found in his slides. Perhaps you can use some of what he talks about in your future work.
Good luck!
Upvotes: 4
It seems that you need a scaling function to map the values of im2 to the values of im1.
This is fairly simple, and you could write a scaling function so that it is available for any such case.
A basic scaling mapping would work as follows:
out_value = min_output + (in_value - min_input) * (outrange / inrange)
given an input value in_value that lies within a range inrange = max_input - min_input, and a mapping that produces an output value out_value within a range outrange = max_output - min_output. We also need to take the minimum input and output bounds (min_input and min_output) into account to get a correct mapping.
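As a quick illustration with the red-channel ranges quoted in the question (hard-coded numbers, shown only as a sketch of the formula):
%// map the red channel of im2 from its stated range (201-255) back onto 0-255
r2 = double(im2(:,:,1));
r1_est = 0 + (r2 - 201) * (255 - 0) / (255 - 201);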
See for example the following code for a scaling function:
%
% scale the values of a matrix using a set of limits
% possible ways to use:
% y = scale( x, in_range, out_range) --> ex. y = scale( x, [8 230], [0 255])
% y = scale( x, out_range) --> ex. y = scale( x, [0 1])
%
function y = scale( x, varargin )
if nargin<2,
    error([upper(mfilename),':: Syntax: y=',mfilename,'(x[,in_range],out_range)']);
end;
if nargin==2,
    inrange=[min(x(:)) max(x(:))]; % compute the limits of the input variable
    outrange=varargin{1};          % get the output limits from the arguments
else
    inrange=varargin{1};           % get the input limits from the arguments
    outrange=varargin{2};          % get the output limits from the arguments
end;
% work in double precision so integer inputs (e.g. uint8 images) do not saturate
x=double(x); inrange=double(inrange); outrange=double(outrange);
if diff(inrange)==0, % degenerate input range (constant input)
    % just do an element-wise clipping to the output limits...
    y = min(max(x,outrange(1)),outrange(2));
else
    % actually scale the data
    % using: out = min_output + (x-min_input) * (outrange / inrange)
    y = outrange(1) + (x-inrange(1))*abs(diff(outrange))/abs(diff(inrange));
end;
This function takes a matrix of values and scales them to a desired range.
In your case it could be used as follows (variable img is the scaled im2):
for i=1:size(im1,3), % for each of the input/output image channels
    output_range = [min(min(im1(:,:,i))) max(max(im1(:,:,i)))];
    img(:,:,i) = scale( im2(:,:,i), output_range);
end;
img = cast(img, class(im2)); % optional: convert back to the original image class
This way im2 is scaled to the range of values of im1, one channel at a time. The output variable img should be the desired one.
Upvotes: 0