MatterGoal

Reputation: 16430

Implementing Color Transfer between two images with a Core Image custom filter

I'm stuck trying to figure out how to perform a color transfer from a source image to another image.

The filter I'd like to create takes two images (ImageA and ImageB) of exactly the same size. It produces an output image in which every pixel of a given color in ImageA is changed to the color at the same pixel position in ImageB.

Check the image. I need to change the bluish pixels to green and purple, using ImageA as the source and ImageB as a sort of color mask. As you can see, only the bluish areas that overlap the green and purple areas have been changed (consider the gray color as transparent...).

[Image: ImageA, ImageB, and the expected result]

My questions are: 1) Is this doable? 2) Should I use a general kernel? From my understanding a color kernel should work too, but I'm not sure I can pass two images to a color kernel.

Could you provide a kernel code example?

Very simple, non-optimized pseudocode to execute on each pixel could be something like:

color func(source imageA, source imageB, color colorToChange) {
    if imageA.currentPixel.color == colorToChange {
        if imageB.getPixel(imageA.currentPixel).color is not transparent {
            return imageB.getPixel(imageA.currentPixel).color
        }
    }
    return imageA.currentPixel.color
}

And this is the current filter that I'm using (although I'm getting strange results with it):

kernel vec4 coreImageKernel(sampler image, sampler msk, __color color)
{
    vec4 a = sample(image, samplerCoord(image));

    if (length(a - color) == 0.0) {
        return sample(msk, samplerCoord(msk));
    } else {
        return a;
    }
}
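
For completeness, this is roughly how I create and apply that kernel from Swift (simplified; assume kernelString holds the kernel source above, and the other names are just placeholders):

import CoreImage

func applyFilter(imageA: CIImage, imageB: CIImage, colorToChange: CIColor) -> CIImage? {
    // kernelString is the kernel source shown above
    guard let kernel = CIKernel(source: kernelString) else { return nil }
    // Each output pixel only reads the same coordinate in both inputs,
    // so the region of interest is just the requested rect.
    return kernel.apply(extent: imageA.extent,
                        roiCallback: { _, rect in rect },
                        arguments: [imageA, imageB, colorToChange])
}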

Upvotes: 0

Views: 416

Answers (1)

user7014451

It's possible with a CIKernel, but I don't believe it is with a simple CIColorKernel, since you need to work with two CIImages.

It's still pretty simple code. You need three inputs: in your example, ImageA, ImageB, and the ImageB pixel color to ignore. Since Core Image works pixel by pixel, you just need to work with the current pixel in both images.

The basic code is this:

kernel vec4 createResult(sampler imageA, sampler imageB, vec4 backgroundColor) {
    vec4 pixelA = sample(imageA, samplerCoord(imageA));
    vec4 pixelB = sample(imageB, samplerCoord(imageB));
    if (pixelA == backgroundColor) {
        return pixelA;
    } else if (pixelB == backgroundColor) {
        return pixelA;
    } else {
        return pixelB;
    }
}

This is untested kernel code, so it may have a syntax error. But here's the logic:

  • pass in a pixel from both images along with the background color
  • if pixelA is the background color, it's not a "dot", so do not change it
  • if pixelB is the background color, it's not a row of dots you are concerned about, so do not change it
  • if pixelB isn't the background color, output it

Note that bullet #4 can only be reached if neither pixel is the background color.
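
Host-side, wrapping this kernel is only a few lines. Here's an untested Swift sketch (same caveat as the kernel code, and the function/variable names are mine). Since backgroundColor is declared as a vec4, I'm passing it as a CIVector of RGBA components:

import CoreImage

let kernelSource = """
kernel vec4 createResult(sampler imageA, sampler imageB, vec4 backgroundColor) {
    vec4 pixelA = sample(imageA, samplerCoord(imageA));
    vec4 pixelB = sample(imageB, samplerCoord(imageB));
    if (pixelA == backgroundColor) {
        return pixelA;
    } else if (pixelB == backgroundColor) {
        return pixelA;
    } else {
        return pixelB;
    }
}
"""

func transferColors(imageA: CIImage, imageB: CIImage, backgroundColor: CIVector) -> CIImage? {
    guard let kernel = CIKernel(source: kernelSource) else { return nil }
    // Each output pixel only reads the same coordinate in both inputs,
    // so the region of interest is simply the requested rect.
    return kernel.apply(extent: imageA.extent,
                        roiCallback: { _, rect in rect },
                        arguments: [imageA, imageB, backgroundColor])
}

// Hypothetical usage with a mid-gray, fully opaque background color:
// let output = transferColors(imageA: ciImageA, imageB: ciImageB,
//                             backgroundColor: CIVector(x: 0.5, y: 0.5, z: 0.5, w: 1.0))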

One final note: Apple has decided to deprecate OpenGL. I spent a week after WWDC '18 working on converting my kernel code to Metal 2 and wasn't (yet) successful. Color kernels? Easy. But something with warp and general kernels, related to getting surrounding pixels, is still eluding me. I think it's related to how I'm coding samplerTransform, but I haven't had the time to work through it.

You should be good to use this with the "Metal-based" pipeline, since I did manage to duplicate a simple pass-through as a CIKernel. Just be aware!

Upvotes: 2
