RRR

Reputation: 315

How do I tell a CIKernel to only process specific regions of an image?

I have a simple CIKernel that tints pixels red:

extern "C" float4 BasicFilter (coreimage::sampler input, coreimage::destination dest) {
    // Sample the input pixel at the destination coordinate.
    float4 inputPixel = input.sample(input.transform(dest.coord()));
    
    // Semi-transparent red to blend over the input.
    float4 color = float4(1.0, 0.0, 0.0, 0.5);
    
    // Standard source-over blend, channel by channel.
    float r = (inputPixel.r * (1 - color.a)) + (color.r * color.a);
    float g = (inputPixel.g * (1 - color.a)) + (color.g * color.a);
    float b = (inputPixel.b * (1 - color.a)) + (color.b * color.a);
    float a = 1.0;
    
    return float4(r,g,b,a);
}

By default, this kernel runs on every single pixel, returning a red-tinted image. I'd like it to run in real time, every frame. For this reason, I think it's important to limit it to processing only the pixels in a specific region of the image. How do I do that?

For example, suppose I only want to tint the pixels inside CGRect(x: 20, y: 20, width: 10, height: 10). I could write an if-then statement in the kernel to only tint the pixels contained in that CGRect. However, all the other pixels would still pass through the kernel.

In other words, no matter how small the CGRect, the above kernel will process all 67,108,864 pixels in the 8192 x 8192 image, instead of just the 100 pixels in the CGRect above.

Note, I'm not looking for a cropped image. I'd still like the output to be 8192 x 8192 pixels, with a tinted 10 x 10 square at x: 20, y: 20.

I thought the ROI callback function might be the solution, but the entire image still appears to be tinted red, not just the region I specified. This is the ROI callback I supplied:

let roiCallback: CIKernelROICallback = { index, rect -> CGRect in
    return CGRect(x: 20, y: 20, width: 10, height: 10)
}

I have also tried:

Step 1: Crop the input image to the region needed and process it. Step 2: Composite the output of step 1 over the original input image.

This did not result in higher FPS. It appears that in Step 2, all 67,108,864 pixels still have to be processed by CISourceOverCompositing anyway, so it's not an improvement.
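For reference, the crop-and-composite attempt can be sketched like this (`tintKernel` and `inputImage` are hypothetical names standing in for the actual compiled kernel and full-resolution source image):

```swift
import CoreImage

// Hypothetical names: `tintKernel` and `inputImage` stand in for the
// compiled CIKernel and the full-resolution CIImage.
func tintRegionByCompositing(tintKernel: CIKernel, inputImage: CIImage) -> CIImage? {
    let region = CGRect(x: 20, y: 20, width: 10, height: 10)

    // Step 1: crop to the region and run the kernel on just that piece.
    let cropped = inputImage.cropped(to: region)
    guard let tinted = tintKernel.apply(
        extent: region,
        roiCallback: { _, rect in rect },  // 1:1 mapping: each output rect needs the same input rect
        arguments: [cropped]
    ) else { return nil }

    // Step 2: composite the tinted patch back over the original.
    return tinted.composited(over: inputImage)
}
```

This keeps the kernel itself to 100 pixels, but as noted above, the full-size source-over composite still touches every pixel each frame.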

I'm rendering the final output in a MTKView.

Is there a more efficient solution?

Edit 01:

I tried Frank Rupprecht's suggestion below. This was the output:

[screenshot of the incorrect output]

If it helps, this is what the rendering stage looks like:

  1. Render output to CVPixelBuffer
  2. Convert the CVPixelBuffer to a CIImage and store the CIImage for the next pass through the kernel in the next frame.
  3. Scale and translate CIImage from step 2 to display in MTKView drawable.

The reason for the intermediate render in step 2 is to hold on to the image at its original resolution, so it is not changed by the scaling and translating that happens when rendering to the drawable.
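The three steps above might look roughly like this in code (all names here — `context`, `pixelBuffer`, `drawableTexture`, `commandBuffer` — and the 0.25 scale factor are illustrative assumptions, not the asker's actual code):

```swift
import CoreImage
import CoreVideo
import Metal

// Sketch of the three rendering steps, under assumed names.
func renderFrame(context: CIContext,
                 output: CIImage,
                 pixelBuffer: CVPixelBuffer,
                 drawableTexture: MTLTexture,
                 commandBuffer: MTLCommandBuffer) -> CIImage {
    // 1. Render the full-resolution output into a CVPixelBuffer.
    context.render(output, to: pixelBuffer)

    // 2. Wrap the buffer in a CIImage to feed back into the kernel next frame.
    let nextInput = CIImage(cvPixelBuffer: pixelBuffer)

    // 3. Scale/translate a copy for display and draw it into the MTKView's drawable.
    let scaled = nextInput.transformed(by: CGAffineTransform(scaleX: 0.25, y: 0.25))
    context.render(scaled,
                   to: drawableTexture,
                   commandBuffer: commandBuffer,
                   bounds: scaled.extent,
                   colorSpace: CGColorSpaceCreateDeviceRGB())
    return nextInput
}
```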

Upvotes: 2

Views: 161

Answers (1)

Frank Rupprecht

Reputation: 10408

You can specify the region the kernel is applied to using the extent parameter in the apply(...) method:

kernel.apply(extent: CGRect(x: 20, y: 20, width: 10, height: 10), roiCallback:...)

This should run the kernel only on the specified region of the image. You don't need to change the roiCallback, since it only specifies what part of the input image is needed to produce the rendered region; just returning the input rect in the roiCallback should be fine.
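In context, a call following this suggestion might look like the sketch below (`tintKernel` and `inputImage` are placeholder names for the question's kernel and source image):

```swift
import CoreImage

// Placeholder names: `tintKernel` is the compiled CIKernel, `inputImage`
// the full-resolution source image from the question.
func applyTint(tintKernel: CIKernel, inputImage: CIImage) -> CIImage? {
    let region = CGRect(x: 20, y: 20, width: 10, height: 10)
    return tintKernel.apply(
        extent: region,                    // the kernel runs only over this rect
        roiCallback: { _, rect in rect },  // 1:1: the same rect of the input is needed
        arguments: [inputImage]
    )
}
```

Because the result's extent is just the 10 x 10 rect, it can then be composited over the original image to produce the full-size output the question describes.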

Another optimization: Since you only ever read the single pixel at the output coordinate from your input (and thereby have a 1:1 mapping from input to output), you can use a CIColorKernel instead by changing your kernel code like this:

extern "C" float4 BasicFilter (coreimage::sample_t input) {
    float4 inputPixel = input; // this is already just a single pixel value
    
    float4 color = float4(1.0, 0.0, 0.0, 0.5);
    
    float r = (inputPixel.r * (1 - color.a)) + (color.r * color.a);
    float g = (inputPixel.g * (1 - color.a)) + (color.g * color.a);
    float b = (inputPixel.b * (1 - color.a)) + (color.b * color.a);
    float a = 1.0;
    
    return float4(r,g,b,a);
}

And then just initialize the kernel as a CIColorKernel instead of a CIKernel. On apply, you also don't need to specify a roiCallback, since Core Image knows it's a 1:1 mapping.
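A sketch of creating and applying the color kernel (the `default.metallib` name and bundle lookup are assumptions about the project setup, not details from the question):

```swift
import CoreImage

// Assumed setup: the compiled kernel lives in the app bundle's default.metallib.
func makeTintedRegion(inputImage: CIImage) -> CIImage? {
    guard let url = Bundle.main.url(forResource: "default", withExtension: "metallib"),
          let data = try? Data(contentsOf: url),
          let kernel = try? CIColorKernel(functionName: "BasicFilter",
                                          fromMetalLibraryData: data)
    else { return nil }

    // No roiCallback parameter: color kernels are 1:1 by definition.
    return kernel.apply(extent: CGRect(x: 20, y: 20, width: 10, height: 10),
                        arguments: [inputImage])
}
```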

Upvotes: 1
