Reputation: 372
To give this question some context (ho ho):
I am subclassing CIFilter under iOS for the purpose of creating some custom photo-effect filters. As per the documentation, this means creating a "compound" filter that encapsulates one or more pre-existing CIFilters within the umbrella of my custom CIFilter subclass.
All well and good. No problems there. For the sake of example, let's say I encapsulate a single CIColorMatrix filter which has been preset with certain rgba input vectors.
When applying my custom filter (or indeed CIColorMatrix alone), I see radically different results when using a CIContext with colour management on versus off. I am creating my contexts as follows:
Colour management on:
CIContext * context = [CIContext contextWithOptions:nil];
Colour management off:
NSDictionary *options = @{kCIContextWorkingColorSpace:[NSNull null], kCIContextOutputColorSpace:[NSNull null]};
CIContext * context = [CIContext contextWithOptions:options];
Now, this is no great surprise. However, I have noticed that all of the pre-built CIPhotoEffect CIFilters, e.g. CIPhotoEffectInstant, are essentially invariant under those same two colour management conditions.
Can anyone lend any insight as to what gives them this property? For example, do they themselves encapsulate particular CIFilters that may be applied with similar invariance?
My goal is to create some custom filters with the same property, without being limited to chaining only CIPhotoEffect filters.
--
Edit: Thanks to YuAo, I have assembled some working code examples which I post here to help others:
Programmatically generated CIColorCubeWithColorSpace CIFilter, invariant under different colour management schemes / working colour space:
self.filter = [CIFilter filterWithName:@"CIColorCubeWithColorSpace"];
[self.filter setDefaults];
int cubeDimension = 2; // Must be a power of 2, max 128
int cubeDataSize = 4 * cubeDimension * cubeDimension * cubeDimension; // number of float values (4 RGBA components per cube entry), not bytes
float cubeDataBytes[8*4] = {
0.0, 0.0, 0.0, 1.0,
0.1, 0.0, 1.0, 1.0,
0.0, 0.5, 0.5, 1.0,
1.0, 1.0, 0.0, 1.0,
0.5, 0.0, 0.5, 1.0,
1.0, 0.0, 1.0, 1.0,
0.0, 1.0, 1.0, 1.0,
1.0, 1.0, 1.0, 1.0
};
NSData *cubeData = [NSData dataWithBytes:cubeDataBytes length:cubeDataSize * sizeof(float)];
[self.filter setValue:@(cubeDimension) forKey:@"inputCubeDimension"];
[self.filter setValue:cubeData forKey:@"inputCubeData"];
CGColorSpaceRef colorSpace = CGColorSpaceCreateWithName(kCGColorSpaceGenericRGB);
[self.filter setValue:(__bridge id)colorSpace forKey:@"inputColorSpace"];
[self.filter setValue:sourceImageCore forKey:@"inputImage"];
CIImage *filteredImageCore = [self.filter outputImage];
CGColorSpaceRelease(colorSpace);
The docs state:
To provide a CGColorSpaceRef object as the input parameter, cast it to type id. With the default color space (null), which is equivalent to kCGColorSpaceGenericRGBLinear, this filter’s effect is identical to that of CIColorCube.
I wanted to go further and be able to read in cubeData from a file. So-called Hald Colour Look-up Tables, or Hald CLUT images, may be used to define a mapping from input colour to output colour.
With help from this answer, I assembled the code to do this also, reposted here for convenience.
Hald CLUT image based CIColorCubeWithColorSpace CIFilter, invariant under different colour management schemes / working colour space:
Usage:
NSData *cubeData = [self colorCubeDataFromLUT:@"LUTImage.png"];
int cubeDimension = 64;
[self.filter setValue:@(cubeDimension) forKey:@"inputCubeDimension"];
[self.filter setValue:cubeData forKey:@"inputCubeData"];
CGColorSpaceRef colorSpace = CGColorSpaceCreateWithName(kCGColorSpaceGenericRGB); // or whatever your image's colour space is
[self.filter setValue:(__bridge id)colorSpace forKey:@"inputColorSpace"];
[self.filter setValue:sourceImageCore forKey:@"inputImage"];
Helper Methods (which use Accelerate Framework):
- (nullable NSData *) colorCubeDataFromLUT:(nonnull NSString *)name
{
UIImage *image = [UIImage imageNamed:name inBundle:[NSBundle bundleForClass:self.class] compatibleWithTraitCollection:nil];
static const int kDimension = 64;
if (!image) return nil;
NSInteger width = CGImageGetWidth(image.CGImage);
NSInteger height = CGImageGetHeight(image.CGImage);
NSInteger rowNum = height / kDimension;
NSInteger columnNum = width / kDimension;
if ((width % kDimension != 0) || (height % kDimension != 0) || (rowNum * columnNum != kDimension)) {
NSLog(@"Invalid colorLUT %@",name);
return nil;
}
float *bitmap = [self createRGBABitmapFromImage:image.CGImage];
if (bitmap == NULL) return nil;
// Convert bitmap data written in row,column order to cube data written in x:r, y:g, z:b representation where z varies > y varies > x.
NSInteger size = kDimension * kDimension * kDimension * sizeof(float) * 4;
float *data = malloc(size);
int bitmapOffset = 0;
int z = 0;
for (int row = 0; row < rowNum; row++)
{
for (int y = 0; y < kDimension; y++)
{
int tmp = z;
for (int col = 0; col < columnNum; col++) {
NSInteger dataOffset = (z * kDimension * kDimension + y * kDimension) * 4;
const float divider = 255.0;
vDSP_vsdiv(&bitmap[bitmapOffset], 1, &divider, &data[dataOffset], 1, kDimension * 4); // Vector scalar divide, single precision: divides bitmap values by 255.0 and writes them to data, processing one run of (kDimension * 4) values at a time.
bitmapOffset += kDimension * 4; // shift bitmap offset to the next set of values, each values vector has (kDimension * 4) values.
z++;
}
z = tmp;
}
z += columnNum;
}
free(bitmap);
return [NSData dataWithBytesNoCopy:data length:size freeWhenDone:YES];
}
- (float *)createRGBABitmapFromImage:(CGImageRef)image {
CGContextRef context = NULL;
CGColorSpaceRef colorSpace;
unsigned char *bitmap;
NSInteger bitmapSize;
NSInteger bytesPerRow;
size_t width = CGImageGetWidth(image);
size_t height = CGImageGetHeight(image);
bytesPerRow = (width * 4);
bitmapSize = (bytesPerRow * height);
bitmap = malloc( bitmapSize );
if (bitmap == NULL) return NULL;
colorSpace = CGColorSpaceCreateDeviceRGB();
if (colorSpace == NULL) {
free(bitmap);
return NULL;
}
context = CGBitmapContextCreate (bitmap,
width,
height,
8,
bytesPerRow,
colorSpace,
(CGBitmapInfo)kCGImageAlphaPremultipliedLast);
CGColorSpaceRelease( colorSpace );
if (context == NULL) {
free (bitmap);
return NULL;
}
CGContextDrawImage(context, CGRectMake(0, 0, width, height), image);
CGContextRelease(context);
float *convertedBitmap = malloc(bitmapSize * sizeof(float));
vDSP_vfltu8(bitmap, 1, convertedBitmap, 1, bitmapSize); // Converts an array of unsigned 8-bit integers to single-precision floating-point values.
free(bitmap);
return convertedBitmap;
}
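For anyone porting the loop above away from Accelerate, here is a hypothetical pure-Python equivalent of the same reindexing (the helper name and nested-loop order are my own; the index mapping matches the vDSP loop): each dim × dim tile of the Hald image, taken left-to-right and top-to-bottom, becomes one z-slice of the cube.

```python
def hald_to_cube(pixels, width, height, dim=64):
    """Reorder a row-major Hald CLUT image (flat RGBA floats) into
    Core Image cube order, where entry (x, y, z) lives at flat index
    ((z * dim + y) * dim + x) * 4 and z varies slowest."""
    cols, rows = width // dim, height // dim
    assert cols * rows == dim, "image does not tile into dim z-slices"
    cube = [0.0] * (dim * dim * dim * 4)
    for row in range(rows):
        for col in range(cols):
            z = row * cols + col  # slice index for this tile
            for y in range(dim):
                for x in range(dim):
                    src = ((row * dim + y) * width + (col * dim + x)) * 4
                    dst = ((z * dim + y) * dim + x) * 4
                    cube[dst:dst + 4] = pixels[src:src + 4]
    return cube
```

The Objective-C version copies a whole 64-pixel run per vDSP_vsdiv call (and folds in the 255.0 divide); the per-pixel loop here trades speed for showing the index arithmetic explicitly.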
One may create a Hald CLUT image by obtaining an identity image (Google!) and then applying to it, in any image editing program, the same image processing chain used to create the "look" being captured. Just make sure you set cubeDimension in the example code to the correct dimension for the LUT image. If the dimension, d, is the number of elements along one side of the 3D LUT cube, the Hald CLUT image width and height would be d*sqrt(d) pixels, and the image would contain d^3 pixels in total.
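As a quick sanity check on that arithmetic (a minimal sketch; the helper name is my own, not any API): for the d = 64 cube used above, the image side comes out at 64 * 8 = 512 pixels, and 512 * 512 = 64^3.

```python
import math

def hald_image_side(d):
    """Side length, in pixels, of a square Hald CLUT image holding a
    d x d x d colour cube: the image must contain exactly d**3 pixels."""
    side = d * math.isqrt(d)  # d * sqrt(d); exact when d is a perfect square (4, 16, 64, ...)
    assert side * side == d ** 3, "d must be a perfect square for a square Hald layout"
    return side
```

Note this only works out when d is a perfect square, which is why Hald CLUTs come in dimensions like 16 and 64.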
Upvotes: 1
Views: 1640
Reputation: 565
Here's how CIPhotoEffect/CIColorCubeWithColorSpace should work with color management on vs. off.
With color management ON, here is what CI should do:

1. Color match from the input space to the cube space. If these two are equal, this is a no-op.
2. Apply the color cube.
3. Color match from the cube space to the output space. If these two are equal, this is a no-op.

With color management OFF, here is what CI should do:
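For intuition, the "apply the color cube" step is essentially a 3D table lookup. Here is a minimal nearest-neighbour sketch in Python (Core Image actually interpolates between cube entries, so this is illustrative only), using the same data layout as the question's 2 × 2 × 2 cube:

```python
def apply_color_cube(rgb, cube, dim):
    """Nearest-neighbour lookup into a colour cube laid out as in
    CIColorCube: entry (x, y, z) holds the output for input
    (r, g, b) = (x, y, z) / (dim - 1), with z varying slowest."""
    x, y, z = (min(dim - 1, round(c * (dim - 1))) for c in rgb)
    i = ((z * dim + y) * dim + x) * 4
    return cube[i:i + 3]  # output RGB (alpha sits at i + 3)
```

With the question's cube data, pure red (1, 0, 0) maps to the second entry, (0.1, 0.0, 1.0), while black and white pass through unchanged.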
Upvotes: 1
Reputation: 1427
CIPhotoEffect internally uses the CIColorCubeWithColorSpace filter.
All the color cube data is stored within CoreImage.framework.
You can find the simulator's CoreImage.framework here (/Applications/Xcode.app/Contents/Developer/Platforms/iPhoneSimulator.platform/Developer/SDKs/iPhoneSimulator.sdk/System/Library/Frameworks/CoreImage.framework/).
The color cube data is named with the scube path extension, e.g. CIPhotoEffectChrome.scube.
CIColorCubeWithColorSpace internally converts the color cube color values to match the working color space of the current Core Image context by using private methods:
-[CIImage _imageByMatchingWorkingSpaceToColorSpace:];
-[CIImage _imageByMatchingColorSpaceToWorkingSpace:];
Upvotes: 2