I use OpenGL ES to display BGR24 data on an iPad. I am new to OpenGL ES, so for the video display part I use code from RosyWriter, an Apple sample. It works, but the CVOpenGLESTextureCacheCreateTextureFromImage call costs more than 30 ms, whereas in RosyWriter its cost is negligible. What I do is: first convert BGR24 to the BGRA pixel format, then create a CVPixelBufferRef with CVPixelBufferCreateWithBytes, and finally get a CVOpenGLESTextureRef via CVOpenGLESTextureCacheCreateTextureFromImage. My code is as follows:
    - (void)transformBGRToBGRA:(const UInt8 *)pict width:(int)width height:(int)height
    {
        // rgb, argb, and bgra are preallocated vImage_Buffer ivars.
        rgb.data = (void *)pict;

        // Expand 3-byte pixels to 4 bytes. The source is really BGR, so the
        // intermediate buffer holds A,B,G,R per pixel (alpha filled with 0).
        vImage_Error error = vImageConvert_RGB888toARGB8888(&rgb, NULL, 0, &argb, NO, kvImageNoFlags);
        if (error != kvImageNoError) {
            NSLog(@"vImageConvert_RGB888toARGB8888 error");
        }

        // Rotate the channels A,B,G,R -> B,G,R,A to obtain BGRA.
        const uint8_t permuteMap[4] = {1, 2, 3, 0};
        error = vImagePermuteChannels_ARGB8888(&argb, &bgra, permuteMap, kvImageNoFlags);
        if (error != kvImageNoError) {
            NSLog(@"vImagePermuteChannels_ARGB8888 error");
        }

        free((void *)pict);
    }
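For clarity, here is what the two vImage calls above accomplish per pixel, written as a plain-C sketch (for illustration only; the real vImage routines are SIMD-accelerated and should be kept). Note that the code above passes alpha = 0 to vImageConvert_RGB888toARGB8888, so the resulting alpha byte is 0 there:

```c
#include <stdint.h>
#include <stddef.h>

/* BGR24 -> BGRA8888: copy the three color bytes through unchanged
 * and append a constant alpha byte. Equivalent in effect to the
 * RGB888toARGB8888 + permute {1,2,3,0} pair used above. */
static void bgr24_to_bgra8888(const uint8_t *src, uint8_t *dst,
                              size_t pixelCount, uint8_t alpha)
{
    for (size_t i = 0; i < pixelCount; i++) {
        dst[4 * i + 0] = src[3 * i + 0]; /* B */
        dst[4 * i + 1] = src[3 * i + 1]; /* G */
        dst[4 * i + 2] = src[3 * i + 2]; /* R */
        dst[4 * i + 3] = alpha;          /* A */
    }
}
```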
After the conversion, I create the CVPixelBufferRef like this:
    [self transformBGRToBGRA:pict width:width height:height];

    CVPixelBufferRef pixelBuffer;
    CVReturn err = CVPixelBufferCreateWithBytes(NULL,
                                                width,
                                                height,
                                                kCVPixelFormatType_32BGRA,
                                                (void *)bgraData,
                                                bytesByRow,
                                                NULL,   // releaseCallback
                                                NULL,   // releaseRefCon
                                                NULL,   // pixelBufferAttributes
                                                &pixelBuffer);
    if (!pixelBuffer || err) {
        NSLog(@"CVPixelBufferCreateWithBytes failed (error: %d)", err);
        return;
    }

    CVOpenGLESTextureRef texture = NULL;
    err = CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault,
                                                       videoTextureCache,
                                                       pixelBuffer,
                                                       NULL,
                                                       GL_TEXTURE_2D,
                                                       GL_RGBA,
                                                       width,
                                                       height,
                                                       GL_BGRA,
                                                       GL_UNSIGNED_BYTE,
                                                       0,
                                                       &texture);
    if (!texture || err) {
        NSLog(@"CVOpenGLESTextureCacheCreateTextureFromImage failed (error: %d)", err);
        CVPixelBufferRelease(pixelBuffer);
        return;
    }
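One thing worth double-checking (an aside, not part of the original post): `bytesByRow` must match the real stride of `bgraData`. For kCVPixelFormatType_32BGRA a tightly packed buffer has bytesPerRow = width * 4, but buffers obtained from Core Video may pad each row for alignment, in which case a flat copy is wrong and the data must be copied row by row. A plain-C sketch of a stride-aware copy (hypothetical helper name):

```c
#include <stdint.h>
#include <string.h>
#include <stddef.h>

/* Copy tightly packed BGRA rows (srcStride = width * 4) into a
 * destination whose rows may be padded (dstStride >= width * 4).
 * With a real CVPixelBufferRef, dstStride should come from
 * CVPixelBufferGetBytesPerRow(). */
static void copy_bgra_rows(uint8_t *dst, size_t dstStride,
                           const uint8_t *src, size_t srcStride,
                           size_t width, size_t height)
{
    for (size_t y = 0; y < height; y++) {
        memcpy(dst + y * dstStride, src + y * srcStride, width * 4);
    }
}
```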
The rest of the code, including the shaders, is almost identical to the RosyWriter sample. So I would like to know why the call is so slow here, and how to fix it.
After researching this for a few days, I found out why CVOpenGLESTextureCacheCreateTextureFromImage takes so much time. When the data is large (3 MB here), the allocation, copy, and move operations are expensive, the copy in particular. Using a pixel buffer pool greatly improves the performance of CVOpenGLESTextureCacheCreateTextureFromImage, from 30 ms down to 5 ms, the same level as glTexImage2D(). My solution is as follows:
    NSMutableDictionary *attributes = [NSMutableDictionary dictionary];
    [attributes setObject:[NSNumber numberWithInt:kCVPixelFormatType_32BGRA]
                   forKey:(NSString *)kCVPixelBufferPixelFormatTypeKey];
    [attributes setObject:[NSNumber numberWithInt:videoWidth]
                   forKey:(NSString *)kCVPixelBufferWidthKey];
    [attributes setObject:[NSNumber numberWithInt:videoHeight]
                   forKey:(NSString *)kCVPixelBufferHeightKey];

    // Create the pool once; then draw a reusable buffer from it each frame.
    CVPixelBufferPoolCreate(kCFAllocatorDefault, NULL, (CFDictionaryRef)attributes, &bufferPool);

    CVPixelBufferPoolCreatePixelBuffer(NULL, bufferPool, &pixelBuffer);
    CVPixelBufferLockBaseAddress(pixelBuffer, 0);
    UInt8 *baseAddress = CVPixelBufferGetBaseAddress(pixelBuffer);
    memcpy(baseAddress, bgraData, bytesByRow * videoHeight);
    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
With this pooled pixelBuffer, the texture creation becomes fast.
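The speedup comes from reusing buffers rather than allocating and freeing a 3 MB buffer on every frame, so the per-frame cost drops to one memcpy. Conceptually, a pool works like this minimal plain-C sketch (hypothetical names; CVPixelBufferPool additionally backs its buffers with IOSurfaces):

```c
#include <stdlib.h>
#include <stddef.h>

/* A tiny fixed-size pool: buffers are allocated once up front and
 * recycled, so steady-state frames never touch the allocator. */
typedef struct {
    void  *slots[4];   /* reusable buffers                        */
    int    used[4];    /* 1 while a buffer is checked out         */
    size_t size;       /* bytes per buffer (bytesPerRow * height) */
} FramePool;

static void pool_init(FramePool *p, size_t size)
{
    p->size = size;
    for (int i = 0; i < 4; i++) {
        p->slots[i] = malloc(size);   /* one-time allocation */
        p->used[i]  = 0;
    }
}

static void *pool_acquire(FramePool *p)
{
    for (int i = 0; i < 4; i++) {
        if (!p->used[i]) { p->used[i] = 1; return p->slots[i]; }
    }
    return NULL; /* pool exhausted: wait or drop the frame */
}

static void pool_release(FramePool *p, void *buf)
{
    for (int i = 0; i < 4; i++) {
        if (p->slots[i] == buf) p->used[i] = 0;
    }
}
```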
Adding the following keys to the attributes dictionary improves performance further, to less than 1 ms:
    NSDictionary *IOSurfaceProperties = [NSDictionary dictionaryWithObjectsAndKeys:
        [NSNumber numberWithBool:YES], @"IOSurfaceOpenGLESFBOCompatibility",
        [NSNumber numberWithBool:YES], @"IOSurfaceOpenGLESTextureCompatibility",
        nil];
    [attributes setObject:IOSurfaceProperties
                   forKey:(NSString *)kCVPixelBufferIOSurfacePropertiesKey];