jjxtra

Reputation: 21180

Fastest way on iOS 7+ to get CVPixelBufferRef from BGRA bytes

What is the fastest way on iOS 7+ to convert raw bytes of BGRA / UIImage data to a CVPixelBufferRef? The bytes are 4 bytes per pixel in BGRA order.

Is there any chance of a direct cast here, rather than copying the data into secondary storage?

I've considered CVPixelBufferCreateWithBytes but I have a hunch it is copying memory...

Upvotes: 1

Views: 1338

Answers (1)

jjxtra

Reputation: 21180

You have to use CVPixelBufferCreate, because CVPixelBufferCreateWithBytes will not allow fast conversion to an OpenGL texture through the Core Video texture cache. I'm not sure why this is the case, but that's the way things are, at least as of iOS 8. I tested this with the profiler: with CVPixelBufferCreateWithBytes, a glTexSubImage2D call (a pixel copy) is made every time a Core Video texture is accessed from the cache.

CVPixelBufferCreate will do funny things if the width is not a multiple of 16. So if you plan on doing CPU operations on the memory returned by CVPixelBufferGetBaseAddress, and you want it laid out like a CGImage or CGBitmapContext, either pad your width up to the next multiple of 16, or read the actual stride with CVPixelBufferGetBytesPerRow and pass that to any CGBitmapContext you create.

I tested all combinations of dimensions of width and height from 16 to 2048, and as long as they were padded to the next highest multiple of 16, the memory was laid out properly.

// Round a dimension up to the next multiple of 16 so that
// CVPixelBufferCreate lays the memory out like a CGBitmapContext
+ (NSInteger) alignmentForPixelBufferDimension:(NSInteger)dim
{
    static const NSInteger modValue = 16;
    NSInteger mod = dim % modValue;
    return (mod == 0 ? dim : (dim + (modValue - mod)));
}

+ (NSDictionary*) pixelBufferSurfaceAttributesOfSize:(CGSize)size
{
    return @{ // BGRA, 4 bytes per pixel
              (NSString*)kCVPixelBufferPixelFormatTypeKey: @(kCVPixelFormatType_32BGRA),
              (NSString*)kCVPixelBufferWidthKey: @(size.width),
              (NSString*)kCVPixelBufferHeightKey: @(size.height),
              // request row bytes aligned to a tightly packed BGRA row
              (NSString*)kCVPixelBufferBytesPerRowAlignmentKey: @(size.width * 4),
              // no extended (padding) pixels on any edge
              (NSString*)kCVPixelBufferExtendedPixelsLeftKey: @(0),
              (NSString*)kCVPixelBufferExtendedPixelsRightKey: @(0),
              (NSString*)kCVPixelBufferExtendedPixelsTopKey: @(0),
              (NSString*)kCVPixelBufferExtendedPixelsBottomKey: @(0),
              (NSString*)kCVPixelBufferPlaneAlignmentKey: @(0),
              // allow CGImage / CGBitmapContext access to the base address
              (NSString*)kCVPixelBufferCGImageCompatibilityKey: @(YES),
              (NSString*)kCVPixelBufferCGBitmapContextCompatibilityKey: @(YES),
              // required for the Core Video OpenGL ES texture cache
              (NSString*)kCVPixelBufferOpenGLESCompatibilityKey: @(YES),
              // back the buffer with an IOSurface so the GPU can share it
              (NSString*)kCVPixelBufferIOSurfacePropertiesKey:
                  @{ @"IOSurfaceCGBitmapContextCompatibility": @(YES),
                     @"IOSurfaceOpenGLESFBOCompatibility": @(YES),
                     @"IOSurfaceOpenGLESTextureCompatibility": @(YES) } };
}
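
Putting the pieces together, here is a minimal sketch of the copy path (the method name and the srcBytes parameter are hypothetical; the Core Video calls are the real API). It creates a padded buffer with CVPixelBufferCreate using the attributes above, then copies tightly packed BGRA bytes row by row, respecting CVPixelBufferGetBytesPerRow in case the buffer's stride differs from the source's:

+ (CVPixelBufferRef) pixelBufferFromBGRABytes:(const uint8_t*)srcBytes
                                        width:(NSInteger)width
                                       height:(NSInteger)height
{
    // Pad the width so the buffer is laid out like a CGBitmapContext
    NSInteger paddedWidth = [self alignmentForPixelBufferDimension:width];
    NSDictionary* attributes = [self pixelBufferSurfaceAttributesOfSize:CGSizeMake(paddedWidth, height)];

    CVPixelBufferRef pixelBuffer = NULL;
    CVReturn result = CVPixelBufferCreate(kCFAllocatorDefault,
                                          paddedWidth,
                                          height,
                                          kCVPixelFormatType_32BGRA,
                                          (__bridge CFDictionaryRef)attributes,
                                          &pixelBuffer);
    if (result != kCVReturnSuccess || pixelBuffer == NULL)
    {
        return NULL;
    }

    CVPixelBufferLockBaseAddress(pixelBuffer, 0);
    uint8_t* dstBytes = (uint8_t*)CVPixelBufferGetBaseAddress(pixelBuffer);
    size_t dstRowBytes = CVPixelBufferGetBytesPerRow(pixelBuffer);
    size_t srcRowBytes = (size_t)width * 4; // tightly packed BGRA source

    // Copy row by row; the destination stride may be wider than the source
    for (NSInteger row = 0; row < height; row++)
    {
        memcpy(dstBytes + (size_t)row * dstRowBytes, srcBytes + (size_t)row * srcRowBytes, srcRowBytes);
    }

    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
    return pixelBuffer; // caller releases with CVPixelBufferRelease
}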

Interestingly enough, if you ask the Core Video cache for a texture with dimensions smaller than the padded dimensions, it returns a texture immediately. Somehow, underneath, it is able to reference the original texture but with a smaller width and height.

To sum up, you cannot wrap existing memory in a CVPixelBufferRef with CVPixelBufferCreateWithBytes and still use the Core Video texture cache efficiently. You must use CVPixelBufferCreate and copy your bytes into the memory returned by CVPixelBufferGetBaseAddress.
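
For completeness, here is a sketch of the texture cache path that all of this enables, assuming an existing EAGLContext named eaglContext and the pixelBuffer created above (the surrounding setup is hypothetical; the CVOpenGLESTextureCache calls are the real API):

// One-time setup, e.g. in your renderer's init
CVOpenGLESTextureCacheRef textureCache = NULL;
CVOpenGLESTextureCacheCreate(kCFAllocatorDefault, NULL, eaglContext, NULL, &textureCache);

// Per frame: wrap the pixel buffer as an OpenGL ES texture, with no glTexSubImage2D copy
CVOpenGLESTextureRef texture = NULL;
CVReturn result = CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault,
                                                               textureCache,
                                                               pixelBuffer,
                                                               NULL,
                                                               GL_TEXTURE_2D,
                                                               GL_RGBA,
                                                               (GLsizei)CVPixelBufferGetWidth(pixelBuffer),
                                                               (GLsizei)CVPixelBufferGetHeight(pixelBuffer),
                                                               GL_BGRA,
                                                               GL_UNSIGNED_BYTE,
                                                               0,
                                                               &texture);
if (result == kCVReturnSuccess)
{
    glBindTexture(CVOpenGLESTextureGetTarget(texture), CVOpenGLESTextureGetName(texture));
    // ... draw with the texture ...
    CFRelease(texture);
    CVOpenGLESTextureCacheFlush(textureCache, 0);
}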

Upvotes: 4
