Reputation: 20163
I've been able to use AVFoundation's AVAssetReader class to upload video frames into an OpenGL ES texture. It has a caveat, however: it fails when used with an AVURLAsset that points to remote media. This failure isn't well documented, and I'm wondering if there's any way around the shortcoming.
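For reference, my setup looks roughly like the following (the URL, track selection, and error handling are placeholders for my actual code). It works for local files, but fails once the asset points at remote media:
// Build an asset reader for the first video track (placeholder URL).
AVURLAsset *asset = [AVURLAsset URLAssetWithURL:[NSURL URLWithString:@"http://example.com/video.mp4"] options:nil];
NSError *error = nil;
AVAssetReader *reader = [AVAssetReader assetReaderWithAsset:asset error:&error];

AVAssetTrack *videoTrack = [[asset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0];
NSDictionary *settings = @{ (id)kCVPixelBufferPixelFormatTypeKey : [NSNumber numberWithInt:kCVPixelFormatType_32BGRA] };
AVAssetReaderTrackOutput *trackOutput = [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:videoTrack outputSettings:settings];
[reader addOutput:trackOutput];

// With a local file this succeeds and -copyNextSampleBuffer yields frames to
// upload as textures; with a remote AVURLAsset the reader fails instead.
if (![reader startReading]) {
    NSLog(@"Failed to start reading: %@", reader.error);
}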
Upvotes: 22
Views: 9846
Reputation: 325
As of 2023, CGLTexImageIOSurface2D is much faster than CVOpenGLESTextureCacheCreateTextureFromImage() for getting CVPixelBuffer data into an OpenGL texture.
Ensure the CVPixelBuffers are IOSurface-backed and in the right format:
videoOutput = [[AVPlayerItemVideoOutput alloc] initWithPixelBufferAttributes:@{
    (id)kCVPixelBufferPixelFormatTypeKey: @(kCVPixelFormatType_32BGRA),
    (id)kCVPixelBufferIOSurfacePropertiesKey: @{},
}];
videoOutput.suppressesPlayerRendering = YES;

GLuint texture;
glGenTextures(1, &texture);
Then, to get each frame:
CVPixelBufferRef pixelBuffer = [videoOutput copyPixelBufferForItemTime:currentTime itemTimeForDisplay:NULL];
if (NULL == pixelBuffer) return;

IOSurfaceRef surface = CVPixelBufferGetIOSurface(pixelBuffer);
if (!surface) {
    CVPixelBufferRelease(pixelBuffer);
    return;
}

glBindTexture(GL_TEXTURE_RECTANGLE, texture);
CGLError error = CGLTexImageIOSurface2D(CGLGetCurrentContext(),
                                        GL_TEXTURE_RECTANGLE,
                                        GL_RGBA,
                                        (int)IOSurfaceGetWidthOfPlane(surface, 0),
                                        (int)IOSurfaceGetHeightOfPlane(surface, 0),
                                        GL_BGRA,
                                        GL_UNSIGNED_INT_8_8_8_8_REV,
                                        surface,
                                        0);

// -copyPixelBufferForItemTime: returns a retained buffer, so release it when done.
CVPixelBufferRelease(pixelBuffer);
Upvotes: 3
Reputation: 20163
There's some API that was released with iOS 6 that I've been able to use to make the process a breeze. It doesn't use AVAssetReader at all, and instead relies on a class called AVPlayerItemVideoOutput. An instance of this class can be added to any AVPlayerItem instance via the new -addOutput: method.

Unlike AVAssetReader, this class will work fine for AVPlayerItems that are backed by a remote AVURLAsset, and it also has the benefit of allowing for a more sophisticated playback interface that supports non-linear playback via -copyPixelBufferForItemTime:itemTimeForDisplay: (instead of AVAssetReader's severely limiting -copyNextSampleBuffer method).
// Initialize the AVFoundation state
AVURLAsset *asset = [AVURLAsset URLAssetWithURL:someUrl options:nil];
[asset loadValuesAsynchronouslyForKeys:[NSArray arrayWithObject:@"tracks"] completionHandler:^{

    NSError *error = nil;
    AVKeyValueStatus status = [asset statusOfValueForKey:@"tracks" error:&error];
    if (status == AVKeyValueStatusLoaded)
    {
        NSDictionary *settings = @{ (id)kCVPixelBufferPixelFormatTypeKey : [NSNumber numberWithInt:kCVPixelFormatType_32BGRA] };
        AVPlayerItemVideoOutput *output = [[[AVPlayerItemVideoOutput alloc] initWithPixelBufferAttributes:settings] autorelease];
        AVPlayerItem *playerItem = [AVPlayerItem playerItemWithAsset:asset];
        [playerItem addOutput:output];
        AVPlayer *player = [AVPlayer playerWithPlayerItem:playerItem];

        // Assume some instance variables exist here. You'll need them to control the
        // playback of the video (via the AVPlayer), and to copy pixel buffers (via the AVPlayerItemVideoOutput).
        [self setPlayer:player];
        [self setPlayerItem:playerItem];
        [self setOutput:output];
    }
    else
    {
        NSLog(@"%@ Failed to load the tracks.", self);
    }
}];

// Now at any later point in time, you can get a pixel buffer
// that corresponds to the current AVPlayer state like this:
CVPixelBufferRef buffer = [[self output] copyPixelBufferForItemTime:[[self playerItem] currentTime] itemTimeForDisplay:nil];
Once you've got your buffer, you can upload it to OpenGL however you want. I recommend the horribly documented CVOpenGLESTextureCacheCreateTextureFromImage() function, because you'll get hardware acceleration on all the newer devices, which is much faster than glTexSubImage2D(). See Apple's GLCameraRipple and RosyWriter demos for examples.
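For reference, a rough sketch of that texture-cache path along the lines of what those samples do, assuming the BGRA pixel format used above (the eaglContext accessor and the surrounding GL state management are just stand-ins for your own setup):
// One-time setup: create a texture cache tied to your EAGLContext.
// ([self eaglContext] is a placeholder for however you store your context.)
CVOpenGLESTextureCacheRef textureCache = NULL;
CVOpenGLESTextureCacheCreate(kCFAllocatorDefault, NULL, [self eaglContext], NULL, &textureCache);

// Per frame: wrap the pixel buffer from -copyPixelBufferForItemTime: in a GL texture.
CVOpenGLESTextureRef cvTexture = NULL;
CVReturn result = CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault,
                                                               textureCache,
                                                               buffer,
                                                               NULL,
                                                               GL_TEXTURE_2D,
                                                               GL_RGBA,
                                                               (GLsizei)CVPixelBufferGetWidth(buffer),
                                                               (GLsizei)CVPixelBufferGetHeight(buffer),
                                                               GL_BGRA,
                                                               GL_UNSIGNED_BYTE,
                                                               0,
                                                               &cvTexture);
if (result == kCVReturnSuccess)
{
    glBindTexture(CVOpenGLESTextureGetTarget(cvTexture), CVOpenGLESTextureGetName(cvTexture));
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    // ... draw with the texture ...

    CFRelease(cvTexture);
}

// The buffer came from a copy* method, so release it, and flush the cache
// once you're done with this frame's texture.
CVPixelBufferRelease(buffer);
CVOpenGLESTextureCacheFlush(textureCache, 0);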
Upvotes: 31