Reputation: 502
So I am getting raw YUV data in 3 separate arrays from a network callback (VoIP app). From what I understand, you cannot create IOSurface-backed pixel buffers with CVPixelBufferCreateWithPlanarBytes, according to the documentation:
Important: You cannot use CVPixelBufferCreateWithBytes() or CVPixelBufferCreateWithPlanarBytes() with kCVPixelBufferIOSurfacePropertiesKey. Calling CVPixelBufferCreateWithBytes() or CVPixelBufferCreateWithPlanarBytes() will result in CVPixelBuffers that are not IOSurface-backed.
So you have to create it with CVPixelBufferCreate, but how do you transfer the data from the callback to the CVPixelBufferRef that you create?
void videoCallBack(uint8_t *yPlane, uint8_t *uPlane, uint8_t *vPlane,
                   size_t width, size_t height,
                   size_t yStride, size_t uStride, size_t vStride)
NSDictionary *pixelAttributes = @{(id)kCVPixelBufferIOSurfacePropertiesKey : @{}};
CVPixelBufferRef pixelBuffer = NULL;
CVReturn result = CVPixelBufferCreate(kCFAllocatorDefault,
                                      width,
                                      height,
                                      kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange,
                                      (__bridge CFDictionaryRef)(pixelAttributes),
                                      &pixelBuffer);
I am unsure what to do after this. Eventually I want to turn this into a CIImage, which I can then use with my GLKView to render the video. How do people "put" the data into the buffer once it's created?
Upvotes: 9
Views: 13173
Reputation: 2090
Here is the full conversion in Obj-C. And to all those geniuses who say "it's trivial": don't patronize anyone! If you are here to help, help; if you are here to show how "smart" you are, go do it somewhere else. Here is a link to a detailed explanation of YUV processing: www.glebsoft.com
/// Method to convert YUV buffers to a pixel buffer in order to feed it to FaceUnity methods.
- (CVPixelBufferRef)pixelBufferFromYUV:(uint8_t *)yBuffer
                               uBuffer:(uint8_t *)uBuffer
                               vBuffer:(uint8_t *)vBuffer
                                 width:(int)width
                                height:(int)height {
    NSDictionary *pixelAttributes = @{(id)kCVPixelBufferIOSurfacePropertiesKey : @{}};
    CVPixelBufferRef pixelBuffer = NULL;
    // Both the U and V planes are quarter the size of the Y plane, so the
    // combined chroma plane holds width * height / 2 bytes.
    size_t uPlaneSize = width * height / 4;
    size_t numberOfElementsForChroma = uPlaneSize * 2;
    CVReturn result = CVPixelBufferCreate(kCFAllocatorDefault,
                                          width,
                                          height,
                                          kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange,
                                          (__bridge CFDictionaryRef)(pixelAttributes),
                                          &pixelBuffer);
    if (result != kCVReturnSuccess) {
        return NULL;
    }
    // For simplicity and speed, build a combined UV plane first. The bi-planar
    // format expects the chroma samples interleaved as u,v,u,v,...
    uint8_t *uvPlane = calloc(numberOfElementsForChroma, sizeof(uint8_t));
    for (size_t i = 0; i < uPlaneSize; i++) {
        uvPlane[i * 2] = uBuffer[i];
        uvPlane[i * 2 + 1] = vBuffer[i];
    }
    CVPixelBufferLockBaseAddress(pixelBuffer, 0);
    // Note: these memcpy calls assume the destination planes have no row
    // padding (bytes-per-row == width); otherwise copy row by row.
    uint8_t *yDestPlane = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0);
    memcpy(yDestPlane, yBuffer, width * height);
    uint8_t *uvDestPlane = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 1);
    memcpy(uvDestPlane, uvPlane, numberOfElementsForChroma);
    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
    free(uvPlane);
    // The caller owns the returned buffer and must CVPixelBufferRelease() it.
    return pixelBuffer;
}
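A caller might use it like this (a minimal sketch; the plane and size variable names are illustrative, taken to match the callback in the question rather than anything in this answer):

// Hypothetical call site: yPlane/uPlane/vPlane come from the network callback.
CVPixelBufferRef pixelBuffer = [self pixelBufferFromYUV:yPlane
                                                uBuffer:uPlane
                                                vBuffer:vPlane
                                                  width:width
                                                 height:height];
if (pixelBuffer) {
    CIImage *image = [CIImage imageWithCVPixelBuffer:pixelBuffer];
    // ...render the CIImage...
    CVPixelBufferRelease(pixelBuffer); // we own the buffer returned above
}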
Upvotes: 4
Reputation: 155
I had a similar question, and here is what I have in Swift 2.0, with information gathered from answers to other questions and from links.
func generatePixelBufferFromYUV2(inout yuvFrame: YUVFrame) -> CVPixelBufferRef?
{
    var uIndex: Int
    var vIndex: Int
    var uvDataIndex: Int
    var pixelBuffer: CVPixelBufferRef? = nil

    let err = CVPixelBufferCreate(kCFAllocatorDefault, yuvFrame.width, yuvFrame.height, kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange, nil, &pixelBuffer)
    if (err != 0) {
        NSLog("Error at CVPixelBufferCreate %d", err)
        return nil
    }

    if (pixelBuffer != nil)
    {
        CVPixelBufferLockBaseAddress(pixelBuffer!, 0)
        let yBaseAddress = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer!, 0)
        if (yBaseAddress != nil)
        {
            let yData = UnsafeMutablePointer<UInt8>(yBaseAddress)
            let yDataPtr = UnsafePointer<UInt8>(yuvFrame.luma.bytes)
            // Copy the Y-plane data straight across.
            memcpy(yData, yDataPtr, yuvFrame.luma.length)
        }
        let uvBaseAddress = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer!, 1)
        if (uvBaseAddress != nil)
        {
            let uvData = UnsafeMutablePointer<UInt8>(uvBaseAddress)
            let pUPointer = UnsafePointer<UInt8>(yuvFrame.chromaB.bytes)
            let pVPointer = UnsafePointer<UInt8>(yuvFrame.chromaR.bytes)
            // For the UV data, we need to interleave them as uvuvuvuv...
            let iuvRow = (yuvFrame.chromaB.length * 2 / yuvFrame.width)
            let iHalfWidth = yuvFrame.width / 2
            for i in 0..<iuvRow
            {
                for j in 0..<iHalfWidth
                {
                    // UV data for the original frame; just interleave it.
                    uvDataIndex = i * iHalfWidth + j
                    uIndex = (i * yuvFrame.width) + (j * 2)
                    vIndex = uIndex + 1
                    uvData[uIndex] = pUPointer[uvDataIndex]
                    uvData[vIndex] = pVPointer[uvDataIndex]
                }
            }
        }
        CVPixelBufferUnlockBaseAddress(pixelBuffer!, 0)
    }
    return pixelBuffer
}
Note: yuvFrame is a structure with Y, U, and V plane buffers, plus width and height. Also, I have the CFDictionary? parameter in CVPixelBufferCreate(...) set to nil. If I give it the IOSurface attribute, it fails and complains that it's not IOSurface-backed, or with error -6683.
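For reference, the answer doesn't include the YUVFrame definition. A hypothetical Objective-C sketch of the shape the code above assumes (NSData-backed planes, judging from the .bytes/.length accesses) might look like:

// Hypothetical shape only; the actual definition is not shown in the answer.
@interface YUVFrame : NSObject
@property (nonatomic, strong) NSData *luma;     // Y plane, width * height bytes
@property (nonatomic, strong) NSData *chromaB;  // U (Cb) plane, (width/2) * (height/2) bytes
@property (nonatomic, strong) NSData *chromaR;  // V (Cr) plane, (width/2) * (height/2) bytes
@property (nonatomic, assign) NSInteger width;
@property (nonatomic, assign) NSInteger height;
@end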
Visit these links for more information. This one is about the UV interleave: How to convert from YUV to CIImage for iOS
And this is a related question: CVOpenGLESTextureCacheCreateTextureFromImage returns error 6683
Upvotes: 4
Reputation: 502
I figured it out, and it was fairly trivial. Here is the full code below. The only issue is that I get a "BSXPCMessage received error for message: Connection interrupted" error, and it takes a while for the video to show.
NSDictionary *pixelAttributes = @{(id)kCVPixelBufferIOSurfacePropertiesKey : @{}};
CVPixelBufferRef pixelBuffer = NULL;
CVReturn result = CVPixelBufferCreate(kCFAllocatorDefault,
                                      width,
                                      height,
                                      kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange,
                                      (__bridge CFDictionaryRef)(pixelAttributes),
                                      &pixelBuffer);
if (result != kCVReturnSuccess) {
    DDLogWarn(@"Unable to create cvpixelbuffer %d", result);
}
// Size of the interleaved UV plane: two quarter-size chroma planes.
size_t numberOfElementsForChroma = width * height / 2;
CVPixelBufferLockBaseAddress(pixelBuffer, 0);
uint8_t *yDestPlane = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0);
memcpy(yDestPlane, yPlane, width * height);
// uvPlane holds the interleaved UV data (see the note below).
uint8_t *uvDestPlane = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 1);
memcpy(uvDestPlane, uvPlane, numberOfElementsForChroma);
CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
CIImage *coreImage = [CIImage imageWithCVPixelBuffer:pixelBuffer]; //success!
CVPixelBufferRelease(pixelBuffer);
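To get from the CIImage to the GLKView mentioned in the question, one option is a CIContext bound to the view's EAGLContext. A minimal sketch, assuming view is the GLKView; the context variable names are illustrative, not part of this answer:

// One-time setup (illustrative names):
EAGLContext *eaglContext = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
CIContext *ciContext = [CIContext contextWithEAGLContext:eaglContext];

// Per frame, inside the GLKView's drawing callback:
[ciContext drawImage:coreImage
              inRect:CGRectMake(0, 0, view.drawableWidth, view.drawableHeight)
            fromRect:[coreImage extent]];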
I forgot to add the code to interleave the two U and V planes, but that shouldn't be too bad.
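A minimal sketch of that interleave step, assuming tightly packed quarter-size uPlane and vPlane buffers as delivered by the callback in the question:

// Interleave the planar U and V data into a single chroma buffer laid out
// as u,v,u,v,... (assumes no row padding in the source planes).
size_t chromaSize = width * height / 2; // == numberOfElementsForChroma above
uint8_t *uvPlane = malloc(chromaSize);
for (size_t i = 0; i < chromaSize / 2; i++) {
    uvPlane[i * 2]     = uPlane[i];
    uvPlane[i * 2 + 1] = vPlane[i];
}
// ...then copy uvPlane into plane 1 as above, and free(uvPlane) afterwards.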
Upvotes: 10