Reputation: 223
I need to determine, from the currently displayed screen, which NSColor is predominant (the highest count in the current bitmap's palette). I built something that works, but it's terribly slow. I need this to execute roughly once per second (it currently takes over 6 seconds to process), and I'd like it not to hog the CPU (which it currently does).
The part that's killing it is the two nested loops (width x height) that analyze each pixel. Is there a more efficient way to do this? I'm sure there is. Any example?
Thanks!
#include "ScreenCapture.h"
#import <AVFoundation/AVFoundation.h>
@implementation ScreenCapture
@synthesize captureSession;
@synthesize stillImageOutput;
@synthesize stillImage;
//-----------------------------------------------------------------------------------------------------------------
- (id) init
{
if ((self = [super init]))
{
    [self setCaptureSession:[[AVCaptureSession alloc] init]];

    // main screen input
    CGDirectDisplayID displayId = kCGDirectMainDisplay;
    AVCaptureScreenInput *input = [[AVCaptureScreenInput alloc] initWithDisplayID:displayId];
    [input setMinFrameDuration:CMTimeMake(1, 1)];
    input.capturesCursor = NO;
    input.capturesMouseClicks = NO;
    if ([[self captureSession] canAddInput:input])
        [[self captureSession] addInput:input];

    // still image output
    [self setStillImageOutput:[[AVCaptureStillImageOutput alloc] init]];
    NSDictionary *outputSettings = [[NSDictionary alloc] initWithObjectsAndKeys:AVVideoCodecJPEG, AVVideoCodecKey, nil];
    [[self stillImageOutput] setOutputSettings:outputSettings];
    if ([[self captureSession] canAddOutput:[self stillImageOutput]])
        [[self captureSession] addOutput:[self stillImageOutput]];

    // start capturing
    [[self captureSession] startRunning];
}
return self;
}
//-----------------------------------------------------------------------------------------------------------------
- (NSColor *) currentlyDominantColor
{
[self captureImage];
if ([self stillImage] != nil)
{
NSBitmapImageRep* imageRep = [[NSBitmapImageRep alloc] initWithCIImage:[self stillImage]];
NSInteger pixelsWide = [imageRep pixelsWide];
NSInteger pixelsHigh = [imageRep pixelsHigh];
NSCountedSet* imageColors = [[NSCountedSet alloc] initWithCapacity:pixelsWide * pixelsHigh];
NSColor* dominantColor = nil;
NSUInteger highCount = 0;
for (NSUInteger x = 0; x < pixelsWide; x++)
{
for (NSUInteger y = 0; y < pixelsHigh; y++)
{
NSColor* color = [imageRep colorAtX:x y:y];
[imageColors addObject:color];
NSUInteger count = [imageColors countForObject:color];
if (count > highCount)
{
dominantColor = color;
highCount = count;
}
}
}
return dominantColor;
}
else
{
// dummy random color until an actual color gets computed
double r1 = ((double) arc4random() / 0x100000000);
double r2 = ((double) arc4random() / 0x100000000);
double r3 = ((double) arc4random() / 0x100000000);
return [NSColor colorWithCalibratedRed:r1 green:r2 blue:r3 alpha:1.0f];
}
}
//-----------------------------------------------------------------------------------------------------------------
- (void) captureImage
{
AVCaptureConnection *videoConnection = nil;
for (AVCaptureConnection *connection in [[self stillImageOutput] connections])
{
for (AVCaptureInputPort *port in [connection inputPorts])
{
if ([[port mediaType] isEqual:AVMediaTypeVideo])
{
videoConnection = connection;
break;
}
}
if (videoConnection)
break;
}
[[self stillImageOutput] captureStillImageAsynchronouslyFromConnection:videoConnection
completionHandler:^(CMSampleBufferRef imageSampleBuffer, NSError *error)
{
NSData *imageData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageSampleBuffer];
CIImage *image = [[CIImage alloc] initWithData:imageData];
[self setStillImage:image];
}];
}
//-----------------------------------------------------------------------------------------------------------------
- (void) dealloc
{
[[self captureSession] stopRunning];
captureSession = nil;
stillImageOutput = nil;
stillImage = nil;
}
@end
Upvotes: 1
Views: 322
Reputation: 1525
This code is not exactly what you asked for, but if you don't obtain pixel values in this manner, the pixel values you do get will not be accurate. I don't know why.
Anyway, this is an answer to a spate of other questions about getting image metrics - specifically, minimum, average and maximum values. Note how I obtained the pixel values; you need to do it like that. The only change you'd make to this code is to add a loop that iterates through each pixel according to height and width - a basic for loop is all you need (see the sketch after the code).
Here's my output...
2015-07-17 14:58:03.751 Chroma Photo Editing Extension[1945:155358] CIAreaMinimum output: 255, 27, 0, 0
2015-07-17 15:00:08.086 Chroma Photo Editing Extension[2156:157963] CIAreaAverage output: 255, 191, 166, 155
2015-07-17 15:01:24.047 Chroma Photo Editing Extension[2253:159246] CIAreaMaximum output: 255, 255, 255, 238
...from the following code (for iOS):
- (CIImage *)outputImage
{
[GlobalCIImage sharedSingleton].ciImage = self.inputImage;
CGRect inputExtent = [[GlobalCIImage sharedSingleton].ciImage extent];
CIVector *extent = [CIVector vectorWithX:inputExtent.origin.x
Y:inputExtent.origin.y
Z:inputExtent.size.width
W:inputExtent.size.height];
// Despite the variable name, this runs CIAreaMaximum; substitute @"CIAreaAverage" or @"CIAreaMinimum" for the other metrics.
CIImage *inputAverage = [CIFilter filterWithName:@"CIAreaMaximum" keysAndValues:kCIInputImageKey, [GlobalCIImage sharedSingleton].ciImage, kCIInputExtentKey, extent, nil].outputImage;
size_t rowBytes = 4;
uint8_t byteBuffer[rowBytes];
[[GlobalContext sharedSingleton].ciContext render:inputAverage toBitmap:byteBuffer rowBytes:rowBytes bounds:[inputAverage extent] format:kCIFormatRGBA8 colorSpace:nil];
int width = inputAverage.extent.size.width;
int height = inputAverage.extent.size.height;
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(NULL, width, height, 8, width * 4, colorSpace, kCGBitmapAlphaInfoMask & kCGImageAlphaPremultipliedFirst);
CGColorSpaceRelease(colorSpace);
CGImageRef cgImage = [[GlobalContext sharedSingleton].ciContext createCGImage:inputAverage fromRect:CGRectMake(0, 0, width, height)];
CGContextDrawImage(context, CGRectMake(0, 0, width, height), cgImage);
CGImageRelease(cgImage); // createCGImage: returns a +1 reference, so release it to avoid a leak
unsigned int *colorData = CGBitmapContextGetData(context);
unsigned int color = *colorData;
float inputRed = 0.0;
float inputGreen = 0.0;
float inputBlue = 0.0;
short a = color & 0xFF;
short r = (color >> 8) & 0xFF;
short g = (color >> 16) & 0xFF;
short b = (color >> 24) & 0xFF;
NSLog(@"CIAreaMaximum output: %d, %d, %d, %d", a, r, g, b);
*colorData = (unsigned int)(r << 8) + ((unsigned int)(g) << 16) + ((unsigned int)(b) << 24) + ((unsigned int)(a));
//NSLog(@"Second read: %i", colorData);
inputRed = r / 255.0;
inputGreen = g / 255.0;
inputBlue = b / 255.0;
CGContextRelease(context);
return [[self dissimilarityKernel] applyWithExtent:[GlobalCIImage sharedSingleton].ciImage.extent roiCallback:^CGRect(int index, CGRect rect) {
return CGRectMake(0, 0, CGRectGetWidth([GlobalCIImage sharedSingleton].ciImage.extent), CGRectGetHeight([GlobalCIImage sharedSingleton].ciImage.extent));
} arguments:@[[GlobalCIImage sharedSingleton].ciImage, [NSNumber numberWithFloat:inputRed], [NSNumber numberWithFloat:inputGreen], [NSNumber numberWithFloat:inputBlue]]];
}
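To make that concrete, here's a rough sketch of the loop (my own illustration, not part of the code above). It assumes the full image, rather than the 1x1 area-filter output, has been drawn into a bitmap context created exactly like the one above (so a row is width * 4 bytes), and it counts raw pixel words with an NSCountedSet for brevity:

// Illustrative per-pixel loop over a bitmap context holding the full image.
// Reading raw 32-bit pixel words avoids the per-pixel NSColor overhead.
unsigned int *pixels = CGBitmapContextGetData(context);
NSCountedSet *counts = [[NSCountedSet alloc] init];
for (int y = 0; y < height; y++)
{
    for (int x = 0; x < width; x++)
    {
        // Box the raw pixel value; unpack with shift & mask (as above) when needed.
        [counts addObject:@(pixels[y * width + x])];
    }
}
// One pass over the distinct values finds the most frequent pixel.
NSNumber *dominant = nil;
NSUInteger best = 0;
for (NSNumber *value in counts)
{
    NSUInteger count = [counts countForObject:value];
    if (count > best)
    {
        best = count;
        dominant = value;
    }
}

A plain C hash table (or sorting, as another answer suggests) would be faster still, but even this avoids the colorAtX:y: bottleneck.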
Upvotes: 0
Reputation: 53010
Here is the outline of an algorithm which is a lot faster. Much of the slowness in your code comes from all the calls to colorAtX:y: - each one involves locating the pixel, creating an NSColor, etc. (profile your app to find out), and it all goes through message dispatch. If you access the bitmap data directly you can do much better.
For example, let's assume your bitmap is meshed (use isPlanar to find out) and has 32-bit pixels (bitsPerPixel); you can adjust for other layouts.
1. Get the bitmap data (bitmapData) - this is effectively a C array of uint32 pixels, and its length is the number of pixels (totalBytes / 4).
2. Sort it (qsort), which will give you runs of the same pixel value - yes, it mucks up your image, but who cares, you created it for this purpose.
3. Walk the sorted array once, tracking run lengths; the value with the longest run is your dominant pixel.
4. Create the NSColor using colorWithColorSpace:components:count: - get the color space from the bitmap (colorSpace) and the float components by extracting each byte from the pixel (shift & mask) and converting to a float in the range 0 to 1.
A sketch of these steps follows. HTH
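Here's a minimal sketch of steps 1-4. It's illustrative only: it assumes a meshed, 32-bits-per-pixel bitmap with no row padding (bytesPerRow == 4 * pixelsWide), the helper names are made up, and the shift/mask order must be matched to your bitmap's actual byte layout.

#import <Cocoa/Cocoa.h>

// qsort comparator for raw 32-bit pixel values.
static int comparePixels(const void *a, const void *b)
{
    uint32_t pa = *(const uint32_t *)a;
    uint32_t pb = *(const uint32_t *)b;
    return (pa > pb) - (pa < pb);
}

// Illustrative helper (the name is made up): dominant color of a meshed,
// 32-bits-per-pixel, unpadded NSBitmapImageRep.
static NSColor *dominantColorOfBitmap(NSBitmapImageRep *rep)
{
    NSUInteger pixelCount = [rep totalBytes] / 4;
    if (pixelCount == 0)
        return nil;

    // Work on a copy so sorting doesn't trash the original image
    // (skip the copy if you created the bitmap just for this).
    uint32_t *pixels = malloc(pixelCount * sizeof(uint32_t));
    memcpy(pixels, [rep bitmapData], pixelCount * sizeof(uint32_t));
    qsort(pixels, pixelCount, sizeof(uint32_t), comparePixels);

    // Identical pixels are now adjacent, so one pass finds the longest run.
    uint32_t bestPixel = pixels[0];
    NSUInteger bestRun = 1, run = 1;
    for (NSUInteger i = 1; i < pixelCount; i++)
    {
        run = (pixels[i] == pixels[i - 1]) ? run + 1 : 1;
        if (run > bestRun)
        {
            bestRun = run;
            bestPixel = pixels[i];
        }
    }
    free(pixels);

    // Unpack the bytes into 0..1 floats. The shifts below assume the red byte
    // is most significant - check your rep's actual layout and adjust.
    CGFloat components[4];
    components[0] = ((bestPixel >> 24) & 0xFF) / 255.0;
    components[1] = ((bestPixel >> 16) & 0xFF) / 255.0;
    components[2] = ((bestPixel >>  8) & 0xFF) / 255.0;
    components[3] = ( bestPixel        & 0xFF) / 255.0;
    return [NSColor colorWithColorSpace:[rep colorSpace] components:components count:4];
}

Sorting is O(n log n), but it's tight C on raw memory, so a screen-sized bitmap should come in far under your one-second budget.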
Upvotes: 2
Reputation: 535566
Consider using the CIAreaAverage filter from Core Image (CIFilter). It knows high-speed math operations better than ordinary mortals do!
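A minimal sketch of that idea (the wrapper below is mine, not a documented API; note that CIAreaAverage returns the average color over the extent, which approximates, but is not the same as, the most frequent color):

#import <Cocoa/Cocoa.h>
#import <QuartzCore/QuartzCore.h>

// Illustrative helper: reduce a CIImage to its average color with CIAreaAverage.
static NSColor *averageColorOfImage(CIImage *image, CIContext *context)
{
    CIVector *extent = [CIVector vectorWithX:image.extent.origin.x
                                           Y:image.extent.origin.y
                                           Z:image.extent.size.width
                                           W:image.extent.size.height];
    CIFilter *filter = [CIFilter filterWithName:@"CIAreaAverage"
                                  keysAndValues:kCIInputImageKey, image,
                                                kCIInputExtentKey, extent, nil];
    // The filter's output is a single pixel; read it straight into a 4-byte buffer.
    uint8_t rgba[4];
    [context render:filter.outputImage
           toBitmap:rgba
           rowBytes:4
             bounds:CGRectMake(0, 0, 1, 1)
             format:kCIFormatRGBA8
         colorSpace:nil];
    return [NSColor colorWithCalibratedRed:rgba[0] / 255.0
                                     green:rgba[1] / 255.0
                                      blue:rgba[2] / 255.0
                                     alpha:rgba[3] / 255.0];
}

Core Image does the reduction for you (typically on the GPU), so this easily fits a once-per-second budget; you could feed it the CIImage you already capture, e.g. averageColorOfImage([self stillImage], ciContext).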
Upvotes: 0