Kevin_TA

Reputation: 4675

AVCaptureSession Display is White (no Video)

I am using an AVCaptureSession with an output setting of:

NSString* key = (NSString*)kCVPixelBufferPixelFormatTypeKey; 
NSNumber* value = [NSNumber numberWithUnsignedInt:kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange];
NSDictionary* videoSettings = [NSDictionary dictionaryWithObject:value forKey:key];
[captureOutput setVideoSettings:videoSettings];

My AVCaptureVideoPreviewLayer displays fine, but that is not enough, because I have had no success taking a screenshot of the AVCaptureVideoPreviewLayer. So, when creating a CGContextRef in the captureOutput delegate, I am using these settings:

CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
CVPixelBufferLockBaseAddress(imageBuffer, 0);

uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddress(imageBuffer);
size_t width = CVPixelBufferGetWidth(imageBuffer);
size_t height = CVPixelBufferGetHeight(imageBuffer);

CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef newContext = CGBitmapContextCreate(baseAddress, width, height, 8, width * 4, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
CGImageRef newImage = CGBitmapContextCreateImage(newContext);

I am no longer receiving an 'unsupported parameter combination' warning, but the display is just plain white.

I should add that when I change

NSNumber* value = [NSNumber numberWithUnsignedInt:kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange];

to

NSNumber* value = [NSNumber numberWithUnsignedInt:kCVPixelFormatType_32BGRA];

Everything works fine. What is my problem?

Upvotes: 4

Views: 796

Answers (2)

onmyway133

Reputation: 48085

Take a look at InvasiveCode's tutorial. It shows how to use the Accelerate and Core Image frameworks to process the Y channel.
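For reference, here is a minimal sketch of the Core Image route (my illustration, not the tutorial's code; it assumes you are inside the captureOutput: delegate with a sampleBuffer in hand). Core Image accepts the bi-planar YUV pixel buffer directly and does the RGB conversion for you:

CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);

// Core Image performs the YUV -> RGB conversion internally
CIImage *ciImage = [CIImage imageWithCVPixelBuffer:imageBuffer];

// in production code, create the CIContext once and reuse it per frame
CIContext *ciContext = [CIContext contextWithOptions:nil];
CGImageRef cgImage = [ciContext createCGImage:ciImage fromRect:[ciImage extent]];

UIImage *image = [UIImage imageWithCGImage:cgImage];
CGImageRelease(cgImage);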

Upvotes: 0

tetuje

Reputation: 289

Take a look at the following code, which converts a bi-planar video frame into packed RGB "by hand". The BT.601 coefficients below assume the video-range pixel format you are already using.

CVPixelBufferLockBaseAddress(imageBuffer, 0);

size_t width = CVPixelBufferGetWidth(imageBuffer);
size_t height = CVPixelBufferGetHeight(imageBuffer);

uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddress(imageBuffer);
CVPlanarPixelBufferInfo_YCbCrBiPlanar *bufferInfo = (CVPlanarPixelBufferInfo_YCbCrBiPlanar *)baseAddress;

NSUInteger yOffset = EndianU32_BtoN(bufferInfo->componentInfoY.offset);
NSUInteger yPitch = EndianU32_BtoN(bufferInfo->componentInfoY.rowBytes);

NSUInteger cbCrOffset = EndianU32_BtoN(bufferInfo->componentInfoCbCr.offset);
NSUInteger cbCrPitch = EndianU32_BtoN(bufferInfo->componentInfoCbCr.rowBytes);

uint8_t *rgbBuffer = malloc(width * height * 3);
uint8_t *yBuffer = baseAddress + yOffset;
uint8_t *cbCrBuffer = baseAddress + cbCrOffset;

for (int y = 0; y < height; y++)
{
    uint8_t *rgbBufferLine = &rgbBuffer[y * width * 3];
    uint8_t *yBufferLine = &yBuffer[y * yPitch];
    // chroma is subsampled 2x2, so one CbCr row covers two Y rows
    uint8_t *cbCrBufferLine = &cbCrBuffer[(y >> 1) * cbCrPitch];

    for (int x = 0; x < width; x++)
    {
        uint8_t luma = yBufferLine[x];       // 'luma' rather than 'y', which would shadow the loop counter
        uint8_t cb = cbCrBufferLine[x & ~1]; // Cb and Cr are interleaved per pixel pair
        uint8_t cr = cbCrBufferLine[x | 1];

        uint8_t *rgbOutput = &rgbBufferLine[x * 3];

        // ITU-R BT.601 video-range conversion, fixed point (coefficients scaled by 256)
        int r = (298 * (luma - 16) + 409 * (cr - 128) + 128) >> 8;
        int g = (298 * (luma - 16) - 100 * (cb - 128) - 208 * (cr - 128) + 128) >> 8;
        int b = (298 * (luma - 16) + 516 * (cb - 128) + 128) >> 8;

        // clamp to [0, 255] before narrowing to 8 bits
        rgbOutput[0] = (uint8_t)MAX(0, MIN(255, r));
        rgbOutput[1] = (uint8_t)MAX(0, MIN(255, g));
        rgbOutput[2] = (uint8_t)MAX(0, MIN(255, b));
    }
}

CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
// rgbBuffer now holds packed 24-bit RGB; free() it when you are done with it
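If you then want a UIImage out of rgbBuffer, the packed 24-bit buffer can be wrapped in a CGImage without copying. Here is a sketch (this part is mine, not from the answer above) under the assumption that rgbBuffer stays valid until the image is released; note that 24-bit RGB without alpha is accepted by CGImageCreate even though CGBitmapContextCreate rejects it:

CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();

// NULL release callback: you must free(rgbBuffer) yourself after releasing the image
CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, rgbBuffer, width * height * 3, NULL);

CGImageRef cgImage = CGImageCreate(width, height,
                                   8,          // bits per component
                                   24,         // bits per pixel (packed RGB, no alpha)
                                   width * 3,  // bytes per row
                                   colorSpace,
                                   kCGBitmapByteOrderDefault | kCGImageAlphaNone,
                                   provider, NULL, NO, kCGRenderingIntentDefault);

UIImage *image = [UIImage imageWithCGImage:cgImage];

CGImageRelease(cgImage);
CGDataProviderRelease(provider);
CGColorSpaceRelease(colorSpace);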

The following link may also help in understanding this video format: http://blog.csdn.net/yiheng_l/article/details/3790219#yuvformats_nv12

Upvotes: 2
