Reputation: 2571
I have a CGImage and I want to determine whether it is predominantly bright or predominantly dark. I could simply iterate over the pixel data and count how many pixels exceed a chosen threshold. However, since I am new to image processing, I assume there must be built-in functions in CoreGraphics or Quartz that are better suited, and maybe even hardware-accelerated.
Upvotes: 1
Views: 827
Reputation: 1525
Here's how to use CIAreaAverage in an iOS app:
CGRect inputExtent = [self.inputImage extent];
CIVector *extent = [CIVector vectorWithX:inputExtent.origin.x
                                       Y:inputExtent.origin.y
                                       Z:inputExtent.size.width
                                       W:inputExtent.size.height];
CIImage *inputAverage = [CIFilter filterWithName:@"CIAreaAverage"
                                   keysAndValues:@"inputImage", self.inputImage,
                                                 @"inputExtent", extent, nil].outputImage;
// Alternative reduction filter:
// CIImage *inputAverage = [self.inputImage imageByApplyingFilter:@"CIAreaMinimum"
//                                           withInputParameters:@{@"inputExtent" : extent}];

EAGLContext *myEAGLContext = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
NSDictionary *options = @{ kCIContextWorkingColorSpace : [NSNull null] };
CIContext *myContext = [CIContext contextWithEAGLContext:myEAGLContext options:options];

// CIAreaAverage reduces the whole image to a single pixel.
size_t rowBytes = 32; // one RGBA8 pixel only needs 4 bytes; 32 leaves alignment headroom
uint8_t byteBuffer[rowBytes]; // buffer to render into
[myContext render:inputAverage
         toBitmap:byteBuffer
         rowBytes:rowBytes
           bounds:[inputAverage extent]
           format:kCIFormatRGBA8
       colorSpace:nil];

const uint8_t *pixel = &byteBuffer[0];
float red   = pixel[0] / 255.0;
float green = pixel[1] / 255.0;
float blue  = pixel[2] / 255.0;
NSLog(@"%f, %f, %f", red, green, blue);
Upvotes: 1
Reputation: 1525
There are faster, more effective ways of measuring that specific metric than an intensity histogram, if all you intend to do with it is take measurements.
Image keying is one: it is derived from the average of the pixel intensities, but doesn't require binning them. The value returned by an image-key formula (which I have, if you need it) can also be used for local adaptive tonal-range mapping (what you want) via a simple gamma adjustment: raise each pixel's intensity to the power of one over the image key.
This is not hard, and you clearly have the skills and experience to use this faster, more effective way of differentiating between a light and a dark image.
What's more, you should make a habit of using image-metric formulas instead of histograms wherever you can. They are designed to interpret information, not just collect it. They are also often interoperable, meaning they can be stacked one on top of the other, just like Core Image filters.
For specifics, read:
Gamma Correction with Adaptation to the Image Key, on page 14 of Tone Mapping for High Dynamic Range Images by Laurence Meylan.
Upvotes: 0
Reputation: 385950
CoreGraphics (aka Quartz 2D) doesn't have any functions for this. Core Image on Mac OS X has CIAreaAverage and CIAreaHistogram, which might help you, but I don't think iOS (as of 5.0.1) has those filters.
iOS does have the Accelerate framework. The vImageHistogramCalculation_ARGBFFFF function and related functions might help you.
Upvotes: 2