Reputation: 6662
I'm trying to develop some color filters for images in my app. We use lookup images because they make it easy to copy filters from other programs and to get the same results across platforms. Our lookups are the standard 512×512 LUT images (an 8×8 grid of 64×64 tiles).
Previously, we've used GPUImage for filtering with lookup images, but I'd like to avoid this dependency, as it's a whopping 5.4 MB and we only need this one feature.
After searching for a couple of hours, I can't seem to find any resources on how to use a lookup image to filter an image through Core Image. Looking at the docs, however, CIColorMatrix looks like the proper tool. The catch is that I can't figure out how it works, which brings me to my question:
Does anyone have an example of how to use CIColorMatrix to filter an image from lookups? (Or any pointers on how I should proceed to figure it out myself?)
I've scraped through GPUImage's code, and it looks like the shaders they use to filter from lookup images are defined as follows:
Lookup image shader:
varying vec2 textureCoordinate;
varying vec2 textureCoordinate2;
uniform sampler2D inputImageTexture;
uniform sampler2D inputImageTexture2; // lookup texture
uniform float intensity;
void main() {
    vec4 textureColor = texture2D(inputImageTexture, textureCoordinate);
    float blueColor = textureColor.b * 63.0;

    vec2 quad1;
    quad1.y = floor(floor(blueColor) / 8.0);
    quad1.x = floor(blueColor) - (quad1.y * 8.0);

    vec2 quad2;
    quad2.y = floor(ceil(blueColor) / 8.0);
    quad2.x = ceil(blueColor) - (quad2.y * 8.0);

    vec2 texPos1;
    texPos1.x = (quad1.x * 0.125) + 0.5/512.0 + ((0.125 - 1.0/512.0) * textureColor.r);
    texPos1.y = (quad1.y * 0.125) + 0.5/512.0 + ((0.125 - 1.0/512.0) * textureColor.g);

    vec2 texPos2;
    texPos2.x = (quad2.x * 0.125) + 0.5/512.0 + ((0.125 - 1.0/512.0) * textureColor.r);
    texPos2.y = (quad2.y * 0.125) + 0.5/512.0 + ((0.125 - 1.0/512.0) * textureColor.g);

    vec4 newColor1 = texture2D(inputImageTexture2, texPos1);
    vec4 newColor2 = texture2D(inputImageTexture2, texPos2);

    vec4 newColor = mix(newColor1, newColor2, fract(blueColor));
    gl_FragColor = mix(textureColor, vec4(newColor.rgb, textureColor.w), intensity);
}
In addition to this vertex shader:
attribute vec4 position;
attribute vec4 inputTextureCoordinate;
attribute vec4 inputTextureCoordinate2;
varying vec2 textureCoordinate;
varying vec2 textureCoordinate2;
void main() {
    gl_Position = position;
    textureCoordinate = inputTextureCoordinate.xy;
    textureCoordinate2 = inputTextureCoordinate2.xy;
}
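For context, the fragment shader treats the lookup image as an 8×8 grid of 64×64 tiles: the blue channel picks two neighbouring tiles, the red and green channels pick a texel inside each tile, and the two samples are blended. A rough Swift translation of that coordinate math (purely illustrative; the 512×512 / 8×8 layout is taken from the shader constants above) would be:

import CoreGraphics

// Purely illustrative: for one input pixel (r, g, b in 0...1) this computes the two
// lookup-texture coordinates the fragment shader samples, plus the blend factor
// between them. Layout assumptions match the shader constants (512x512, 8x8 tiles).
func lutSampleCoordinates(r: Float, g: Float, b: Float) -> (CGPoint, CGPoint, Float) {
    let blue = b * 63.0
    let blueFloor = blue.rounded(.down)
    let blueCeil = blue.rounded(.up)

    let tile: Float = 0.125           // one tile is 1/8 of the texture
    let texel: Float = 1.0 / 512.0    // half-texel offset avoids bleeding between tiles

    // Tiles containing floor(blueColor) and ceil(blueColor)
    let q1y = (blueFloor / 8.0).rounded(.down)
    let q1x = blueFloor - q1y * 8.0
    let q2y = (blueCeil / 8.0).rounded(.down)
    let q2x = blueCeil - q2y * 8.0

    // Red and green select the texel inside each 64x64 tile
    let p1 = CGPoint(x: CGFloat(q1x * tile + 0.5 * texel + (tile - texel) * r),
                     y: CGFloat(q1y * tile + 0.5 * texel + (tile - texel) * g))
    let p2 = CGPoint(x: CGFloat(q2x * tile + 0.5 * texel + (tile - texel) * r),
                     y: CGFloat(q2y * tile + 0.5 * texel + (tile - texel) * g))

    return (p1, p2, blue - blueFloor)  // blend factor = fract(blueColor)
}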
Can/should I instead create my own filter using these shaders?
Upvotes: 3
Views: 2096
Reputation: 6662
All credit for this answer goes to Nghia Tran. If you see this, thank you!
It turns out there was a single answer out there all along: Nghia Tran wrote an article that solves my exact use case. He kindly provided an extension that generates a CIFilter from a lookup image, which I will paste below to preserve this answer for future developers.
You'll need to import CIFilter+LUT.h in your bridging header if you're using Swift.
Here's a snippet that demonstrates using it on the GPU in Swift 4. It's far from optimized (the context, for instance, should be cached), but it's a good starting point.
static func applyFilter(with lookupImage: UIImage, to image: UIImage) -> UIImage? {
    guard let cgInputImage = image.cgImage else {
        return nil
    }

    // Use an OpenGL ES-backed CIContext so the filter renders on the GPU.
    guard let glContext = EAGLContext(api: .openGLES2) else {
        return nil
    }
    let ciContext = CIContext(eaglContext: glContext)

    // Build the CIColorCube-based filter from the lookup image (cube dimension 64).
    guard let lookupFilter = CIFilter(lookupImage: lookupImage, dimension: 64) else {
        return nil
    }
    lookupFilter.setValue(CIImage(cgImage: cgInputImage), forKey: "inputImage")

    guard let output = lookupFilter.outputImage else {
        return nil
    }
    guard let cgOutputImage = ciContext.createCGImage(output, from: output.extent) else {
        return nil
    }
    return UIImage(cgImage: cgOutputImage)
}
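For completeness, a call site could look something like this. This is just a hypothetical sketch: "MyLUT" and "Photo" are placeholder asset names, and ImageFilter stands in for whatever type ends up hosting applyFilter(with:to:) in your project.

import UIKit

// Hypothetical usage of the helper above; names are placeholders.
func filteredPhoto() -> UIImage? {
    guard let lut = UIImage(named: "MyLUT"),
          let photo = UIImage(named: "Photo") else {
        return nil
    }
    return ImageFilter.applyFilter(with: lut, to: photo)
}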
CIFilter+LUT.h
#import <CoreImage/CoreImage.h>
@import UIKit.UIImage;
@class CIFilter;
@interface CIFilter (LUT)
+(CIFilter *) filterWithLookupImage:(UIImage *)image dimension:(NSInteger) n;
@end
CIFilter+LUT.m
#import "CIFilter+LUT.h"
#import <CoreImage/CoreImage.h>
#import <OpenGLES/EAGL.h>
@implementation CIFilter (LUT)
+(CIFilter *)filterWithLookupImage:(UIImage *)image dimension:(NSInteger)n {
    NSInteger width = CGImageGetWidth(image.CGImage);
    NSInteger height = CGImageGetHeight(image.CGImage);
    NSInteger rowNum = height / n;
    NSInteger columnNum = width / n;

    // The lookup image must tile exactly into n x n-pixel tiles, with n tiles in total
    // (e.g. an 8x8 grid of 64x64 tiles for n = 64).
    if ((width % n != 0) || (height % n != 0) || (rowNum * columnNum != n)) {
        NSLog(@"Invalid colorLUT");
        return nil;
    }

    unsigned char *bitmap = [self createRGBABitmapFromImage:image.CGImage];
    if (bitmap == NULL) {
        return nil;
    }

    // Re-order the tiled bitmap into the flat n*n*n RGBA float array that CIColorCube expects.
    NSInteger size = n * n * n * sizeof(float) * 4;
    float *data = malloc(size);
    int bitmapOffset = 0;
    int z = 0;
    for (int row = 0; row < rowNum; row++) {
        for (int y = 0; y < n; y++) {
            int tmp = z;
            for (int col = 0; col < columnNum; col++) {
                for (int x = 0; x < n; x++) {
                    float r = (unsigned int)bitmap[bitmapOffset];
                    float g = (unsigned int)bitmap[bitmapOffset + 1];
                    float b = (unsigned int)bitmap[bitmapOffset + 2];
                    float a = (unsigned int)bitmap[bitmapOffset + 3];

                    NSInteger dataOffset = (z * n * n + y * n + x) * 4;
                    data[dataOffset]     = r / 255.0;
                    data[dataOffset + 1] = g / 255.0;
                    data[dataOffset + 2] = b / 255.0;
                    data[dataOffset + 3] = a / 255.0;

                    bitmapOffset += 4;
                }
                z++;
            }
            z = tmp;
        }
        z += columnNum;
    }
    free(bitmap);

    CIFilter *filter = [CIFilter filterWithName:@"CIColorCube"];
    [filter setValue:[NSData dataWithBytesNoCopy:data length:size freeWhenDone:YES] forKey:@"inputCubeData"];
    [filter setValue:[NSNumber numberWithInteger:n] forKey:@"inputCubeDimension"];
    return filter;
}
+ (unsigned char *)createRGBABitmapFromImage:(CGImageRef)image {
    CGContextRef context = NULL;
    CGColorSpaceRef colorSpace;
    unsigned char *bitmap;
    NSInteger bitmapSize;
    NSInteger bytesPerRow;

    size_t width = CGImageGetWidth(image);
    size_t height = CGImageGetHeight(image);

    bytesPerRow = (width * 4);
    bitmapSize = (bytesPerRow * height);

    bitmap = malloc(bitmapSize);
    if (bitmap == NULL) {
        return NULL;
    }

    colorSpace = CGColorSpaceCreateDeviceRGB();
    if (colorSpace == NULL) {
        free(bitmap);
        return NULL;
    }

    // Draw the image into a plain 8-bit-per-channel RGBA buffer so its pixels can be read directly.
    context = CGBitmapContextCreate(bitmap, width, height, 8, bytesPerRow, colorSpace, kCGImageAlphaPremultipliedLast);
    CGColorSpaceRelease(colorSpace);
    if (context == NULL) {
        free(bitmap);
        return NULL;
    }

    CGContextDrawImage(context, CGRectMake(0, 0, width, height), image);
    CGContextRelease(context);

    return bitmap;
}
@end
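Under the hood the category just feeds a CIColorCube filter. If you only need that last step, here is a minimal Swift sketch, assuming you have already built the n*n*n RGBA Float32 cube data from the lookup image (which is what the category above does):

import CoreImage

// Minimal sketch: wraps existing cube data in a CIColorCube filter.
// `cubeData` is assumed to already be in the order CIColorCube expects.
func colorCubeFilter(cubeData: Data, dimension n: Int) -> CIFilter? {
    guard let filter = CIFilter(name: "CIColorCube") else { return nil }
    filter.setValue(cubeData, forKey: "inputCubeData")
    filter.setValue(n, forKey: "inputCubeDimension")
    return filter
}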
Upvotes: 1