Reputation: 11617
I am writing an application to show stats on the light conditions as seen by the iPhone camera. I take an image every second and then perform calculations on it.
To capture an image, I am using the following method:
-(void) captureNow
{
    AVCaptureConnection *videoConnection = nil;
    for (AVCaptureConnection *connection in captureManager.stillImageOutput.connections)
    {
        for (AVCaptureInputPort *port in [connection inputPorts])
        {
            if ([[port mediaType] isEqual:AVMediaTypeVideo])
            {
                videoConnection = connection;
                break;
            }
        }
        if (videoConnection) { break; }
    }

    [captureManager.stillImageOutput captureStillImageAsynchronouslyFromConnection:videoConnection completionHandler: ^(CMSampleBufferRef imageSampleBuffer, NSError *error)
    {
        NSData *imageData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageSampleBuffer];
        latestImage = [[UIImage alloc] initWithData:imageData];
    }];
}
However, the captureStillImageAsynchronously... method causes the 'shutter' sound to be played by the phone, which is no good for my application, as it will be capturing images constantly.
I have read that it is not possible to disable this sound effect. Instead, I want to capture frames from the video input of the phone:
AVCaptureDeviceInput *newVideoInput = [[AVCaptureDeviceInput alloc] initWithDevice:[self backFacingCamera] error:nil];
and hopefully turn these into UIImage objects.
How would I achieve this? I don't know much about how AVFoundation works - I downloaded some example code and modified it for my purposes.
Upvotes: 2
Views: 2220
Reputation: 170317
Don't use the still image output for this. Instead, grab frames from the device's video camera and process the data in the pixel buffers you receive as an AVCaptureVideoDataOutputSampleBufferDelegate.
You can set up a video connection using code like the following:
// Grab the back-facing camera
AVCaptureDevice *backFacingCamera = nil;
NSArray *devices = [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo];
for (AVCaptureDevice *device in devices)
{
    if ([device position] == AVCaptureDevicePositionBack)
    {
        backFacingCamera = device;
    }
}

// Create the capture session
captureSession = [[AVCaptureSession alloc] init];

// Add the video input
NSError *error = nil;
videoInput = [[[AVCaptureDeviceInput alloc] initWithDevice:backFacingCamera error:&error] autorelease];
if ([captureSession canAddInput:videoInput])
{
    [captureSession addInput:videoInput];
}

// Add the video frame output
videoOutput = [[AVCaptureVideoDataOutput alloc] init];
[videoOutput setAlwaysDiscardsLateVideoFrames:YES];
[videoOutput setVideoSettings:[NSDictionary dictionaryWithObject:[NSNumber numberWithInt:kCVPixelFormatType_32BGRA] forKey:(id)kCVPixelBufferPixelFormatTypeKey]];
[videoOutput setSampleBufferDelegate:self queue:dispatch_get_main_queue()];

if ([captureSession canAddOutput:videoOutput])
{
    [captureSession addOutput:videoOutput];
}
else
{
    NSLog(@"Couldn't add video output");
}

// Start capturing
[captureSession setSessionPreset:AVCaptureSessionPreset640x480];
if (![captureSession isRunning])
{
    [captureSession startRunning];
}
You'll then need to process these frames in a delegate method that looks like the following:
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
    CVImageBufferRef cameraFrame = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(cameraFrame, 0);
    int bufferHeight = CVPixelBufferGetHeight(cameraFrame);
    int bufferWidth = CVPixelBufferGetWidth(cameraFrame);

    // Process pixel buffer bytes here

    CVPixelBufferUnlockBaseAddress(cameraFrame, 0);
}
The raw pixel bytes for your BGRA image will be contained within the array starting at CVPixelBufferGetBaseAddress(cameraFrame). You can iterate over those to obtain your desired values.
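Since the question asks for UIImage objects, here's a minimal sketch (my own helper, not part of the code above) that wraps the BGRA pixel buffer in a bitmap context and produces a UIImage you could hand to your existing per-second processing code:

// My own helper, not from the answer above: wrap a BGRA pixel buffer in a UIImage
- (UIImage *)imageFromPixelBuffer:(CVImageBufferRef)pixelBuffer
{
    CVPixelBufferLockBaseAddress(pixelBuffer, 0);

    void *baseAddress = CVPixelBufferGetBaseAddress(pixelBuffer);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(pixelBuffer);
    size_t width = CVPixelBufferGetWidth(pixelBuffer);
    size_t height = CVPixelBufferGetHeight(pixelBuffer);

    // 32BGRA corresponds to little-endian byte order with premultiplied alpha first
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
    CGImageRef cgImage = CGBitmapContextCreateImage(context);
    UIImage *image = [UIImage imageWithCGImage:cgImage];

    CGImageRelease(cgImage);
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);

    return image;
}

If you only need brightness statistics, though, you can skip the UIImage round trip entirely and work on the raw bytes.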
However, you'll find that any operation performed over the entire image on the CPU will be a little slow. You can use the Accelerate framework to help with an average color operation like the one you want here. I've used vDSP_meanv() in the past to average luminance values, once you have those in an array. For something like that, you might be best served by grabbing YUV planar data from the camera instead of the BGRA values I pull down here.
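As a rough sketch of that vDSP approach (the helper below is my own and assumes the 32BGRA frames from the setup above with no row padding, which holds for typical 640x480 buffers), you can lift one channel into a float array with vDSP_vfltu8() and then average it with vDSP_meanv():

#import <Accelerate/Accelerate.h>

// My own sketch, not from the answer: cheap brightness estimate from the green channel
- (float)averageGreenOfPixelBuffer:(CVImageBufferRef)cameraFrame
{
    CVPixelBufferLockBaseAddress(cameraFrame, 0);

    unsigned char *pixels = (unsigned char *)CVPixelBufferGetBaseAddress(cameraFrame);
    // Assumes no row padding; otherwise walk row by row using CVPixelBufferGetBytesPerRow()
    vDSP_Length pixelCount = CVPixelBufferGetWidth(cameraFrame) * CVPixelBufferGetHeight(cameraFrame);

    float *greenValues = (float *)malloc(pixelCount * sizeof(float));
    // Stride of 4 picks out the G byte of every BGRA pixel (offset 1 into each pixel)
    vDSP_vfltu8(pixels + 1, 4, greenValues, 1, pixelCount);

    float mean = 0.0f;
    vDSP_meanv(greenValues, 1, &mean, pixelCount);

    free(greenValues);
    CVPixelBufferUnlockBaseAddress(cameraFrame, 0);
    return mean;
}

With YUV planar frames you would instead point vDSP_vfltu8() at the luminance plane (stride 1), which averages true luma and skips the channel-picking step.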
I've also written an open source framework to process video using OpenGL ES, although I don't yet have whole-image reduction operations in there like you'd need for the kind of image analysis here. My histogram generator is probably the closest I have to what you're trying to do.
Upvotes: 5