Reputation: 157
I'm using AVFoundation to capture video and I'm recording in the kCVPixelFormatType_420YpCbCr8BiPlanarFullRange format. I want to make a grayscale image directly from the Y plane of the YpCbCr data.
I've tried to create a CGContextRef by calling CGBitmapContextCreate, but the problem is that I don't know which color space and pixel format to choose.
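For context, the video data output is configured roughly like this (simplified; captureSession and captureQueue stand in for my actual session and dispatch queue):
AVCaptureVideoDataOutput *videoOutput = [[AVCaptureVideoDataOutput alloc] init];
// Request the bi-planar full-range YpCbCr format, so plane 0 holds the luma (Y) values
NSNumber *format = [NSNumber numberWithUnsignedInt:kCVPixelFormatType_420YpCbCr8BiPlanarFullRange];
videoOutput.videoSettings = [NSDictionary dictionaryWithObject:format
                                                         forKey:(id)kCVPixelBufferPixelFormatTypeKey];
[videoOutput setSampleBufferDelegate:self queue:captureQueue];
[captureSession addOutput:videoOutput];
This is the delegate method where I try to build the grayscale image: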
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(imageBuffer, 0);

    /* Get information about the Y plane */
    uint8_t *YPlaneAddress = (uint8_t *)CVPixelBufferGetBaseAddressOfPlane(imageBuffer, 0);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRowOfPlane(imageBuffer, 0);
    size_t width = CVPixelBufferGetWidthOfPlane(imageBuffer, 0);
    size_t height = CVPixelBufferGetHeightOfPlane(imageBuffer, 0);

    /* the problematic part of the code */
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceGray();
    CGContextRef newContext = CGBitmapContextCreate(YPlaneAddress,
        width, height, 8, bytesPerRow, colorSpace, kCVPixelFormatType_1Monochrome);
    CGImageRef newImage = CGBitmapContextCreateImage(newContext);
    UIImage *grayscaleImage = [[UIImage alloc] initWithCGImage:newImage];

    // process the grayscale image ...
}
When I run the code above, I get these errors:
<Error>: CGBitmapContextCreateImage: invalid context 0x0
<Error>: CGBitmapContextCreate: unsupported parameter combination: 8 integer bits/component; 16 bits/pixel; 1-component color space; kCGImageAlphaPremultipliedLast; 192 bytes/row.
PS: Sorry for my English.
Upvotes: 4
Views: 2139
Reputation: 78835
If I'm not mistaken, you shouldn't go via a CGContext. Instead, you should create a data provider and then create the image directly from it.
Another mistake in your code is the use of the kCVPixelFormatType_1Monochrome constant. It's a constant used in video processing (the AV libraries), not in Core Graphics (the CG libraries). Just use kCGImageAlphaNone; the fact that a single component (gray) per pixel is needed (instead of three as for RGB) is derived from the color space.
It could look like this:
CGDataProviderRef dataProvider = CGDataProviderCreateWithData(NULL, YPlaneAddress,
    height * bytesPerRow, NULL);
CGImageRef newImage = CGImageCreate(width, height, 8, 8, bytesPerRow,
    colorSpace, kCGImageAlphaNone, dataProvider, NULL, NO, kCGRenderingIntentDefault);
CGDataProviderRelease(dataProvider);
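Put together with your delegate method, it could look roughly like this (a sketch only; note that the data provider points straight into the pixel buffer, so use or copy the image before unlocking, and release the CG objects when you're done):
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(imageBuffer, 0);

    uint8_t *YPlaneAddress = (uint8_t *)CVPixelBufferGetBaseAddressOfPlane(imageBuffer, 0);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRowOfPlane(imageBuffer, 0);
    size_t width = CVPixelBufferGetWidthOfPlane(imageBuffer, 0);
    size_t height = CVPixelBufferGetHeightOfPlane(imageBuffer, 0);

    // Wrap the Y plane in a data provider and build the image directly, no CGContext needed
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceGray();
    CGDataProviderRef dataProvider = CGDataProviderCreateWithData(NULL, YPlaneAddress,
        height * bytesPerRow, NULL);
    CGImageRef newImage = CGImageCreate(width, height, 8, 8, bytesPerRow,
        colorSpace, kCGImageAlphaNone, dataProvider, NULL, NO, kCGRenderingIntentDefault);
    UIImage *grayscaleImage = [[UIImage alloc] initWithCGImage:newImage];

    // process the grayscale image while the pixel buffer is still locked ...

    // Clean up and unlock the pixel buffer
    CGImageRelease(newImage);
    CGDataProviderRelease(dataProvider);
    CGColorSpaceRelease(colorSpace);
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
}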
Upvotes: 2