Elmi

Reputation: 6193

Algorithm for shrinking/limiting palette of an image

As input data I have a 24-bit RGB image and a palette of 2..20 fixed colours. These colours are in no way spread regularly over the full colour range.

Now I have to modify the colours of the input image so that only the colours of the given palette are used, replacing each colour with the palette entry that is closest to the original colour (not closest mathematically, but to a human's visual impression). So what I need is an algorithm that takes an input colour and finds the colour in the target palette that visually fits it best. Please note: I'm not looking for a naive comparison/difference algorithm, but for something that really incorporates the impression a colour makes on humans!

Since this is something that should already have been done, and because I do not want to reinvent the wheel: is there some example source code out there that does this job? In the best case it is really a piece of code and not a link to a disastrously huge library ;-)

(I'd guess OpenCV does not provide such a function?)

Thanks

Upvotes: 3

Views: 2618

Answers (2)

sietschie

Reputation: 7543

You should look at the Lab colour space. It was designed so that distances in the colour space correspond to perceptual distances. So once you have converted your image, you can compute the distances as before, but should get a better result from a perceptual point of view. In OpenCV you can use the cvtColor(source, destination, CV_BGR2Lab) function.
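
A minimal sketch of such a comparison, assuming OpenCV 2.x and 8-bit BGR input (labDistance is a hypothetical helper, not part of OpenCV):

#include <opencv2/opencv.hpp>
#include <cmath>

// Perceptual distance between two BGR colours, measured in Lab.
// cvtColor operates on Mats, so each colour is wrapped in a 1x1 image.
float labDistance(cv::Vec3b bgr1, cv::Vec3b bgr2)
{
    cv::Mat m1(1, 1, CV_8UC3, cv::Scalar(bgr1[0], bgr1[1], bgr1[2]));
    cv::Mat m2(1, 1, CV_8UC3, cv::Scalar(bgr2[0], bgr2[1], bgr2[2]));
    cv::cvtColor(m1, m1, CV_BGR2Lab);
    cv::cvtColor(m2, m2, CV_BGR2Lab);
    cv::Vec3b a = m1.at<cv::Vec3b>(0, 0);
    cv::Vec3b b = m2.at<cv::Vec3b>(0, 0);
    float dL = (float)a[0] - b[0];
    float da = (float)a[1] - b[1];
    float db = (float)a[2] - b[2];
    return std::sqrt(dL*dL + da*da + db*db); // Euclidean distance in 8-bit Lab
}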

Another idea would be to use dithering, which mixes missing colours using neighbouring pixels. A popular algorithm for this is Floyd-Steinberg dithering.

Here is an example of mine, where I combined a k-means-optimized palette with the Lab colour space and Floyd-Steinberg dithering:

#include <opencv2/opencv.hpp>
#include <iostream>

using namespace cv;
using namespace std;

cv::Mat floydSteinberg(cv::Mat img, cv::Mat palette);
cv::Vec3b findClosestPaletteColor(cv::Vec3b color, cv::Mat palette);

int main(int argc, char** argv)
{
    // Number of clusters (colors on result image)
    int nrColors = 18;

    cv::Mat imgBGR = imread(argv[1],1);

    cv::Mat img;
    cvtColor(imgBGR, img, CV_BGR2Lab);


    cv::Mat colVec = img.reshape(1, img.rows*img.cols); // flatten to an Nx3 matrix, one row per pixel

    cv::Mat colVecD;
    colVec.convertTo(colVecD, CV_32F, 1.0); // kmeans needs floating point samples


    cv::Mat labels, centers;
    cv::kmeans(colVecD, nrColors, labels,
            cv::TermCriteria(CV_TERMCRIT_ITER, 100, 0.1),
            3, cv::KMEANS_PP_CENTERS, centers); // compute k mean centers

    // replace each pixel by its corresponding cluster center
    cv::Mat imgPosterized = img.clone();
    for(int i = 0; i < img.rows; i++ )
        for(int j = 0; j < img.cols; j++ )
            for(int k = 0; k < 3; k++)
                imgPosterized.at<Vec3b>(i,j)[k] = saturate_cast<uchar>(centers.at<float>(labels.at<int>(j+img.cols*i),k));

    // convert palette back to uchar
    cv::Mat palette;
    centers.convertTo(palette,CV_8UC3,1.0);

    // apply Floyd-Steinberg dithering (in Lab space)
    cv::Mat fs = floydSteinberg(img, palette);

    cv::Mat imgPosterizedBGR, fsBGR;
    cvtColor(imgPosterized, imgPosterizedBGR, CV_Lab2BGR);
    cvtColor(fs, fsBGR, CV_Lab2BGR);


    imshow("input",imgBGR); // original image
    imshow("result",imgPosterizedBGR); // posterized image
    imshow("fs",fsBGR); // floyd steinberg dithering
    waitKey();

  return 0;
}

cv::Mat floydSteinberg(cv::Mat imgOrig, cv::Mat palette)
{
    cv::Mat img = imgOrig.clone();
    cv::Mat resImg = img.clone();
    for(int i = 0; i < img.rows; i++ )
        for(int j = 0; j < img.cols; j++ )
        {
            cv::Vec3b newpixel = findClosestPaletteColor(img.at<Vec3b>(i,j), palette);
            resImg.at<Vec3b>(i,j) = newpixel;

            for(int k=0;k<3;k++)
            {
                int quant_error = (int)img.at<Vec3b>(i,j)[k] - newpixel[k];
                // Distribute the error to the not-yet-processed neighbours
                // (scan order is row by row, left to right):
                //           *      7/16
                //   3/16  5/16     1/16
                if(j+1 < img.cols)
                    img.at<Vec3b>(i,j+1)[k] = min(255,max(0,(int)img.at<Vec3b>(i,j+1)[k] + (7 * quant_error) / 16));
                if(i+1 < img.rows && j-1 >= 0)
                    img.at<Vec3b>(i+1,j-1)[k] = min(255,max(0,(int)img.at<Vec3b>(i+1,j-1)[k] + (3 * quant_error) / 16));
                if(i+1 < img.rows)
                    img.at<Vec3b>(i+1,j)[k] = min(255,max(0,(int)img.at<Vec3b>(i+1,j)[k] + (5 * quant_error) / 16));
                if(i+1 < img.rows && j+1 < img.cols)
                    img.at<Vec3b>(i+1,j+1)[k] = min(255,max(0,(int)img.at<Vec3b>(i+1,j+1)[k] + (1 * quant_error) / 16));
            }
        }
    return resImg;
}

float vec3bDist(cv::Vec3b a, cv::Vec3b b) 
{
    return sqrt( pow((float)a[0]-b[0],2) + pow((float)a[1]-b[1],2) + pow((float)a[2]-b[2],2) );
}

cv::Vec3b findClosestPaletteColor(cv::Vec3b color, cv::Mat palette)
{
    int minI = 0;
    float minDistance = vec3bDist(color, palette.at<Vec3b>(0));
    for (int i = 1; i < palette.rows; i++)
    {
        float distance = vec3bDist(color, palette.at<Vec3b>(i));
        if (distance < minDistance)
        {
            minDistance = distance;
            minI = i;
        }
    }
    return palette.at<Vec3b>(minI);
}
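
Since the question actually specifies a fixed palette, you can also skip the k-means step entirely: build the palette Mat directly from the given colours (converted to Lab, like the image) and pass it to floydSteinberg. A sketch, where paletteBGR is a hypothetical input and the other names come from the program above:

std::vector<cv::Vec3b> paletteBGR; // fill with your 2..20 fixed colours
cv::Mat palette((int)paletteBGR.size(), 1, CV_8UC3);
for (int p = 0; p < palette.rows; p++)
    palette.at<cv::Vec3b>(p, 0) = paletteBGR[p];
cvtColor(palette, palette, CV_BGR2Lab); // compare in Lab, like the image
cv::Mat fs = floydSteinberg(img, palette); // img is the Lab image from main()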

Upvotes: 3

Andrey Smorodov

Reputation: 10852

Try this algorithm (it reduces the number of colours, but it computes the palette by itself):

#include <opencv2/opencv.hpp>
#include "opencv2/legacy/legacy.hpp"
#include <vector>
#include <list>
#include <iostream>

using namespace cv;
using namespace std;

int main(void)
{
    // Number of clusters (colors on result image)
    int NrGMMComponents = 32;
    // Source file name
    string fname="D:\\ImagesForTest\\tools.jpg";

    cv::Mat SampleImg = imread(fname,1);

    //cv::GaussianBlur(SampleImg,SampleImg,Size(5,5),3);

    int SampleImgHeight = SampleImg.rows;
    int SampleImgWidth  = SampleImg.cols;

    // Pick datapoints
    vector<Vec3d> ListSamplePoints;

    for (int y=0; y<SampleImgHeight; y++)
    {
        for (int x=0; x<SampleImgWidth; x++)
        {
            // Get pixel color at that position
            Vec3b bgrPixel = SampleImg.at<Vec3b>(y, x);

            uchar b = bgrPixel.val[0];
            uchar g = bgrPixel.val[1];
            uchar r = bgrPixel.val[2];
            if(rand()%25==0) // sample only about every 25th pixel
            {
                ListSamplePoints.push_back(Vec3d(b,g,r));
            }
        } // for (x)
    } // for (y)


    // Form training matrix
    Mat labels;

    int NrSamples = ListSamplePoints.size();    
    Mat samples( NrSamples, 3, CV_32FC1 );

    for (int s=0; s<NrSamples; s++)
    {
        Vec3d v = ListSamplePoints.at(s);
        samples.at<float>(s,0) = (float) v[0];
        samples.at<float>(s,1) = (float) v[1];
        samples.at<float>(s,2) = (float) v[2];
    }    

    cout << "Learning to represent the sample distributions with" << NrGMMComponents << "gaussians." << endl;

    // Algorithm parameters
    CvEMParams params;
    params.covs      = NULL;
    params.means     = NULL;
    params.weights   = NULL;
    params.probs     = NULL;
    params.nclusters = NrGMMComponents;
    params.cov_mat_type       = CvEM::COV_MAT_GENERIC; // DIAGONAL, GENERIC, SPHERICAL
    params.start_step         = CvEM::START_AUTO_STEP;
    params.term_crit.max_iter = 1500;
    params.term_crit.epsilon  = 0.001;
    params.term_crit.type     = CV_TERMCRIT_ITER|CV_TERMCRIT_EPS;
    //params.term_crit.type     = CV_TERMCRIT_ITER;

    // Train
    cout << "Started GMM training" << endl;
    CvEM em_model;
    em_model.train( samples, Mat(), params, &labels );
    cout << "Finished GMM training" << endl;

    // Result image
    Mat img  = Mat::zeros( Size( SampleImgWidth, SampleImgHeight ), CV_8UC3 );

    // Ask classifier for each pixel
    Mat sample( 1, 3, CV_32FC1 );
    Mat means;
    means=em_model.getMeans();
    for(int i = 0; i < img.rows; i++ )
    {
        for(int j = 0; j < img.cols; j++ )
        {
            Vec3b v=SampleImg.at<Vec3b>(i,j);
            sample.at<float>(0,0) = (float) v[0];
            sample.at<float>(0,1) = (float) v[1];
            sample.at<float>(0,2) = (float) v[2];

            int response = cvRound(em_model.predict( sample ));

            img.at<Vec3b>(i,j)[0]=means.at<double>(response,0);
            img.at<Vec3b>(i,j)[1]=means.at<double>(response,1);
            img.at<Vec3b>(i,j)[2]=means.at<double>(response,2);

        }
    }

    img.convertTo(img,CV_8UC3);
    imshow("result",img);
    waitKey();
    // Save the result
    cv::imwrite("result.png", img);

    return 0;
}

PS: For perceptual colour distance measurement it's better to use the L*a*b colour space. There is a converter in OpenCV for this purpose. For clustering you can use k-means with predefined cluster centers (your palette entries). After clustering you'll get points with indexes of palette entries.
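
A minimal sketch of that last idea, assuming OpenCV 2.x and a fixed palette given as BGR triplets (mapToPalette is a hypothetical helper, not an OpenCV function): every pixel is replaced by the palette entry with the smallest Lab distance.

#include <opencv2/opencv.hpp>
#include <vector>
#include <cfloat>

cv::Mat mapToPalette(const cv::Mat& imgBGR, const std::vector<cv::Vec3b>& paletteBGR)
{
    // Convert image and palette to Lab once, so distances are perceptual
    cv::Mat imgLab;
    cv::cvtColor(imgBGR, imgLab, CV_BGR2Lab);

    cv::Mat palLab((int)paletteBGR.size(), 1, CV_8UC3);
    for (int p = 0; p < palLab.rows; p++)
        palLab.at<cv::Vec3b>(p, 0) = paletteBGR[p];
    cv::cvtColor(palLab, palLab, CV_BGR2Lab);

    cv::Mat result = imgBGR.clone();
    for (int i = 0; i < imgLab.rows; i++)
        for (int j = 0; j < imgLab.cols; j++)
        {
            cv::Vec3b px = imgLab.at<cv::Vec3b>(i, j);
            int best = 0;
            float bestDist = FLT_MAX;
            for (int p = 0; p < palLab.rows; p++)
            {
                cv::Vec3b c = palLab.at<cv::Vec3b>(p, 0);
                float d0 = (float)px[0] - c[0];
                float d1 = (float)px[1] - c[1];
                float d2 = (float)px[2] - c[2];
                float d = d0*d0 + d1*d1 + d2*d2; // squared distance suffices for argmin
                if (d < bestDist) { bestDist = d; best = p; }
            }
            result.at<cv::Vec3b>(i, j) = paletteBGR[best]; // original BGR palette entry
        }
    return result;
}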

Upvotes: 1
