honeymoon

Reputation: 2520

RGB color correction

I am trying to colour-correct my acquired images by calibrating their RGB values against an image of a 24-patch colour checker taken in the same session.

My idea was to fit an optimal polynomial function mapping the (averaged) RGB values of the 24 colour checker patches to the known reference values. I run a GridSearch to find the optimal degree and then use the fitted function to correct the RGB values of my images.

# These are the known values of the colour checker (ground truth)
vector = [[115,82,68],[194,150,130],[98,122,157],[87,108,67],[133,128,177],[103,189,170],[214,126,44],[80,91,166],[193,90,99],[94,60,108],[157,188,64],[224,163,46],[56,61,150],[70,148,73],[175,54,60],[231,199,31],[187,86,149],[8,133,161],[243,243,242],[200,200,200],[160,160,160],[122,122,121],[85,85,85],[52,52,52]]
# These are the average RGB values measured from the acquired colour checker image
X = [[42,20,31],[147,67,119],[40,47,142],[36,41,36],[76,53,158],[49,107,165],[162,47,33],[30,34,168],[167,28,66],[32,16,62],[101,121,74],[172,80,38],[17,26,164],[35,92,74],[168,20,32],[165,158,61],[165,37,146],[25,72,174],[176,175,179],[173,172,176],[137,121,173],[71,63,126],[33,31,63],[15,14,28]]

If I plot the data it looks like this: [scatter plot of measured vs. ground-truth RGB values]

After a GridSearch, a degree of 2 appears to be optimal:

({'linearregression__normalize': True, 'polynomialfeatures__degree': 2, 'linearregression__fit_intercept': True})

import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

def PolynomialRegression(degree=2, **kwargs):
    return make_pipeline(PolynomialFeatures(degree),
                         LinearRegression(**kwargs))

param_grid = {'polynomialfeatures__degree': np.arange(21),
              'linearregression__fit_intercept': [True, False],
              'linearregression__normalize': [True, False]}

grid = GridSearchCV(PolynomialRegression(), param_grid, cv=7)
grid.fit(X, vector)
print(grid.best_params_)

If I try to correct a part of my color checker image, I don't get the expected RGB values:

import cv2
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression

example = cv2.imread('cc_part12.png')
poly = PolynomialFeatures(degree=2)
X_ = poly.fit_transform(X)
clf = LinearRegression()
t = clf.fit(X_, vector)

img = example.copy()
height, width, depth = img.shape
print(height, width, img.shape)
for i in range(0, height):
    for j in range(0, width):
        # transform expects a 2D array of shape (n_samples, 3)
        predict_ = poly.transform(example[i, j].reshape(1, -1))
        img[i, j] = clf.predict(predict_)[0]
cv2.imwrite('out.png', img)

Test image: Color checker test

Output image: Output after prediction

The average RGB values of the left rectangle should be [115,82,68] and of the right rectangle [194,150,130], but the values differ quite a lot: ~[78,69,88] and ~[151,135,174].

Any suggestion would be much appreciated. I would also like to know how to speed up the prediction; iterating over each pixel is not very efficient.
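One way to avoid the per-pixel loop is to flatten the image to an (N, 3) array and make a single vectorised `predict` call. A minimal sketch of the idea (the chart data and image here are random stand-ins for `X`, `vector`, and the loaded image):

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression

# Toy stand-ins for the fitted objects above
X = np.random.randint(0, 256, size=(24, 3))
vector = np.random.randint(0, 256, size=(24, 3))
poly = PolynomialFeatures(degree=2)
clf = LinearRegression().fit(poly.fit_transform(X), vector)

img = np.random.randint(0, 256, size=(64, 64, 3), dtype=np.uint8)
h, w, d = img.shape
pixels = img.reshape(-1, 3)                       # (h*w, 3)
corrected = clf.predict(poly.transform(pixels))   # one vectorised call
corrected = np.clip(corrected, 0, 255).astype(np.uint8)
out = corrected.reshape(h, w, 3)
```

The reshape is free (a view), so the cost drops to one polynomial expansion and one matrix multiply over all pixels.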

Edit

As Kel Solaar suggested, I tried to convert the ground-truth RGB values and the RGB values of the colour checker image into a linear space using oetf_reverse_sRGB:

# First I scale the RGB values from 0 to 1
a = []
for i in X:
    a.append((i[0] / 255, i[1] / 255, i[2] / 255))
b = []
for j in vector:
    b.append((j[0] / 255, j[1] / 255, j[2] / 255))

# Convert to linear scale
z = oetf_reverse_sRGB(a)
q = oetf_reverse_sRGB(b)

Then I applied the color fitting function:

c = first_order_colour_fit(z, q)

Which gives me the 3x3 Colour fitting matrix:

[[ 0.68864569 -0.21360123  0.0493316 ]
[-0.03219473  0.48606749 -0.02698569]
[-0.05330554  0.01785467  0.79400138]]

If I understood correctly, I have to multiply the RGB values of my image with:

R2 = R*0.68864569
G2 = G*0.48606749
B2 = B*0.79400138

I have done this for a small part of my colour checker image as before; unfortunately it gives me this output: [corrected image crop]

The average value of the first box is (30,10,21) and of the second (103,33,87), which is far away from the expected values (115,82,68) and (194,150,130).

Maybe I misunderstood something?
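For reference, a 3×3 colour fitting matrix is normally applied as a full matrix product per pixel, so each output channel mixes all three input channels rather than using only the diagonal entries. A minimal sketch with the matrix from above (the input pixel values are illustrative):

```python
import numpy as np

# The 3x3 fitting matrix returned by first_order_colour_fit (from the question)
c = np.array([[ 0.68864569, -0.21360123,  0.0493316 ],
              [-0.03219473,  0.48606749, -0.02698569],
              [-0.05330554,  0.01785467,  0.79400138]])

# Linear-light RGB pixels in [0, 1], shape (n, 3)
rgb_linear = np.array([[0.02, 0.01, 0.015],
                       [0.30, 0.06, 0.18]])

# Apply the full matrix: each output channel is a weighted mix of R, G and B
rgb_fitted = rgb_linear @ c.T    # equivalent to c @ pixel for each row
```

Multiplying each channel by only its diagonal coefficient drops the cross-channel terms, which is why the corrected values come out too dark and colour-shifted.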

Upvotes: 2

Views: 5791

Answers (2)

Kel Solaar

Reputation: 4060

The values of your ground truth colour rendition chart are non-linearly encoded with the sRGB Opto-Electrical Transfer Function (OETF), i.e. they are in gamma space. This will likely get in your way when performing linear regression and linear algebra.

Colour performs first order colour fitting, i.e. multivariate linear regression, as follows:

def first_order_colour_fit(m_1, m_2):
    """
    Performs a first order colour fit from given :math:`m_1` colour array to
    :math:`m_2` colour array. The resulting colour fitting matrix is computed
    using multiple linear regression.

    The purpose of that object is for example the matching of two
    *ColorChecker* colour rendition charts together.

    Parameters
    ----------
    m_1 : array_like, (3, n)
        Test array :math:`m_1` to fit onto array :math:`m_2`.
    m_2 : array_like, (3, n)
        Reference array the array :math:`m_1` will be colour fitted against.

    Returns
    -------
    ndarray, (3, 3)
        Colour fitting matrix.

    Examples
    --------
    >>> m_1 = np.array(
    ...     [[0.17224810, 0.09170660, 0.06416938],
    ...      [0.49189645, 0.27802050, 0.21923399],
    ...      [0.10999751, 0.18658946, 0.29938611],
    ...      [0.11666120, 0.14327905, 0.05713804],
    ...      [0.18988879, 0.18227649, 0.36056247],
    ...      [0.12501329, 0.42223442, 0.37027445],
    ...      [0.64785606, 0.22396782, 0.03365194],
    ...      [0.06761093, 0.11076896, 0.39779139],
    ...      [0.49101797, 0.09448929, 0.11623839],
    ...      [0.11622386, 0.04425753, 0.14469986],
    ...      [0.36867946, 0.44545230, 0.06028681],
    ...      [0.61632937, 0.32323906, 0.02437089],
    ...      [0.03016472, 0.06153243, 0.29014596],
    ...      [0.11103655, 0.30553067, 0.08149137],
    ...      [0.41162190, 0.05816656, 0.04845934],
    ...      [0.73339206, 0.53075188, 0.02475212],
    ...      [0.47347718, 0.08834792, 0.30310315],
    ...      [0.00000000, 0.25187016, 0.35062450],
    ...      [0.76809639, 0.78486240, 0.77808297],
    ...      [0.53822392, 0.54307997, 0.54710883],
    ...      [0.35458526, 0.35318419, 0.35524431],
    ...      [0.17976704, 0.18000531, 0.17991488],
    ...      [0.09351417, 0.09510603, 0.09675027],
    ...      [0.03405071, 0.03295077, 0.03702047]]
    ... )
    >>> m_2 = np.array(
    ...     [[0.15579559, 0.09715755, 0.07514556],
    ...      [0.39113140, 0.25943419, 0.21266708],
    ...      [0.12824821, 0.18463570, 0.31508023],
    ...      [0.12028974, 0.13455659, 0.07408400],
    ...      [0.19368988, 0.21158946, 0.37955964],
    ...      [0.19957425, 0.36085439, 0.40678123],
    ...      [0.48896605, 0.20691688, 0.05816533],
    ...      [0.09775522, 0.16710693, 0.47147724],
    ...      [0.39358649, 0.12233400, 0.10526425],
    ...      [0.10780332, 0.07258529, 0.16151473],
    ...      [0.27502671, 0.34705454, 0.09728099],
    ...      [0.43980441, 0.26880559, 0.05430533],
    ...      [0.05887212, 0.11126272, 0.38552469],
    ...      [0.12705825, 0.25787860, 0.13566464],
    ...      [0.35612929, 0.07933258, 0.05118732],
    ...      [0.48131976, 0.42082843, 0.07120612],
    ...      [0.34665585, 0.15170714, 0.24969804],
    ...      [0.08261116, 0.24588716, 0.48707733],
    ...      [0.66054904, 0.65941137, 0.66376412],
    ...      [0.48051509, 0.47870296, 0.48230082],
    ...      [0.33045354, 0.32904184, 0.33228886],
    ...      [0.18001305, 0.17978567, 0.18004416],
    ...      [0.10283975, 0.10424680, 0.10384975],
    ...      [0.04742204, 0.04772203, 0.04914226]]
    ... )
    >>> first_order_colour_fit(m_1, m_2)  # doctest: +ELLIPSIS
    array([[ 0.6982266...,  0.0307162...,  0.1621042...],
           [ 0.0689349...,  0.6757961...,  0.1643038...],
           [-0.0631495...,  0.0921247...,  0.9713415...]])
    """

    return np.transpose(np.linalg.lstsq(m_1, m_2)[0])

The reverse for sRGB OETF is as follows:

def oetf_reverse_sRGB(V):
    """
    Defines the *sRGB* colourspace reverse opto-electronic transfer function
    (OETF / OECF).

    Parameters
    ----------
    V : numeric or array_like
        Electrical signal :math:`V`.

    Returns
    -------
    numeric or ndarray
        Corresponding *luminance* :math:`L` of the image.

    References
    ----------
    -   :cite:`InternationalElectrotechnicalCommission1999a`
    -   :cite:`InternationalTelecommunicationUnion2015i`

    Examples
    --------
    >>> oetf_reverse_sRGB(0.461356129500442)  # doctest: +ELLIPSIS
    0.1...
    """

    V = np.asarray(V)

    return as_numeric(
        np.where(V <= oetf_sRGB(0.0031308), V / 12.92, ((V + 0.055) / 1.055) **
                 2.4))
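Putting the two pieces together, an end-to-end correction might look like the following sketch. The `srgb_decode`/`srgb_encode` helpers are simplified stand-ins for `oetf_reverse_sRGB` and the forward sRGB OETF, and the chart values are random placeholders for the measured and reference patches:

```python
import numpy as np

def srgb_decode(v):
    """Inverse sRGB OETF: gamma-encoded [0, 1] -> linear light."""
    v = np.asarray(v, dtype=float)
    return np.where(v <= 0.04045, v / 12.92, ((v + 0.055) / 1.055) ** 2.4)

def srgb_encode(l):
    """sRGB OETF: linear light [0, 1] -> gamma-encoded."""
    l = np.asarray(l, dtype=float)
    return np.where(l <= 0.0031308, l * 12.92, 1.055 * l ** (1 / 2.4) - 0.055)

def first_order_fit(m_1, m_2):
    """Least-squares 3x3 matrix mapping m_1 (n, 3) onto m_2 (n, 3)."""
    return np.transpose(np.linalg.lstsq(m_1, m_2, rcond=None)[0])

# Measured and reference chart patches, scaled to [0, 1] and linearised
measured = srgb_decode(np.random.rand(24, 3))
reference = srgb_decode(np.random.rand(24, 3))

M = first_order_fit(measured, reference)    # (3, 3) fitting matrix

# Correct an image: decode -> full matrix multiply -> re-encode
img = np.random.rand(8, 8, 3)
linear = srgb_decode(img)
fitted = linear.reshape(-1, 3) @ M.T
fitted = np.clip(fitted, 0, 1).reshape(img.shape)
out = srgb_encode(fitted)
```

The key points are that the fit and the matrix multiply both happen in linear light, and that the matrix is applied as a full 3×3 product, not per-channel.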

Upvotes: 3

user2261062

Reputation:

I wrote a calibration algorithm some time ago for a Basler camera. The camera exposes parameters that can be adjusted, such as RGB gains, alpha, exposure and contrast, but you can also correct the image later on your computer if your camera cannot be calibrated.

My experience is that to fine tune a color checker you need more than just the three RGB values.

What I ended up doing was a gradient-descent-style algorithm.

Basically start at some point that is good enough.

Then generate children by modifying individual parameters or combinations of them.

Take the best child and repeat until convergence.
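The loop described above could be sketched like this (the per-channel gain/offset model and the cost function are illustrative, not the actual camera parameters):

```python
import numpy as np

rng = np.random.default_rng(0)

def cost(params, measured, reference):
    """Mean squared error after applying per-channel gain/offset (toy model)."""
    gains, offsets = params[:3], params[3:]
    corrected = measured * gains + offsets
    return np.mean((corrected - reference) ** 2)

# Synthetic data: the reference is the measured chart with a known distortion
measured = rng.random((24, 3))
reference = np.clip(measured * 1.2 + 0.05, 0, 1)

params = np.array([1.0, 1.0, 1.0, 0.0, 0.0, 0.0])  # start "good enough"
best = cost(params, measured, reference)

for _ in range(200):
    # Generate children by perturbing parameters (individually or combined)
    children = params + rng.normal(0, 0.02, size=(16, params.size))
    costs = [cost(c, measured, reference) for c in children]
    i = int(np.argmin(costs))
    if costs[i] < best:            # take the best child
        params, best = children[i], costs[i]
# repeat until convergence (fixed iteration budget here)
```

Each round keeps the best child only if it improves the cost, so the search monotonically converges toward the distortion parameters.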

At the very end I was also calibrating whites, to make sure that for a pure white color there's no color shift.

Also, your image is noisy and dirty, so to analyse the current colour, take several pixels (hundreds of them) and remove outliers; finally take the median as your current colour. This way you filter the noise of the image as much as you can.
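A robust patch estimate along those lines might look like this sketch (`robust_patch_colour` and the MAD-based threshold are illustrative choices):

```python
import numpy as np

def robust_patch_colour(pixels, n_mad=3.0):
    """Estimate a patch colour from many noisy pixels.

    Rejects outliers per channel using the median absolute deviation (MAD),
    then returns the per-channel median of the surviving pixels.
    """
    pixels = np.asarray(pixels, dtype=float)           # (n, 3)
    med = np.median(pixels, axis=0)
    mad = np.median(np.abs(pixels - med), axis=0) + 1e-9
    keep = np.all(np.abs(pixels - med) <= n_mad * mad, axis=1)
    return np.median(pixels[keep], axis=0)

# A patch of ~400 noisy pixels around (115, 82, 68) with a few dirty outliers
rng = np.random.default_rng(1)
patch = rng.normal([115, 82, 68], 5, size=(400, 3))
patch[:10] = [250, 250, 250]                           # dirt / specular spots
estimate = robust_patch_colour(patch)
```

The median is already fairly robust, but rejecting gross outliers first keeps isolated dirt or glare from skewing the estimate at all.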

Upvotes: 1
