Stacy Kraemer

Reputation: 11

Why the need for offscreen buffers in Brad Larson's GPUImage framework when outputting to file

Brad, I saw in your GPUImage framework that you have an offscreen framebuffer and renderbuffer, called movieFrameBuffer and movieRenderBuffer, defined in the GPUImageMovieWriter.m file. Why is there a need to declare offscreen framebuffers? Can't you use the buffers defined in GPUImageView.m to grab the pixels? Is 720p and 1080p support the reason?

Upvotes: 1

Views: 503

Answers (1)

Brad Larson

Reputation: 170309

While this might be better asked on the GitHub project page or on my forum, there's one interesting reason why I do this, and I thought I'd clarify it here.

When testing AVAssetWriter, I found that using BGRA frames dramatically increased encoding performance. Therefore, when grabbing frames using glReadPixels(), I needed to apply a color-swizzling shader to the incoming filtered frames in order for them to be read out in the BGRA color format. This is rendered using that offscreen framebuffer.
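To sketch what that pass looks like (a simplified illustration, not the exact code from GPUImageMovieWriter.m; the shader string, helper name, and variable names below are hypothetical stand-ins for the real rendering pass):

    #include <OpenGLES/ES2/gl.h>

    // Fragment shader that swaps the red and blue channels while rendering
    // the filtered frame into the offscreen framebuffer, so the bytes come
    // out in BGRA order for AVAssetWriter. (Hypothetical sketch.)
    static const char *kSwizzleFragmentShader =
        "varying highp vec2 textureCoordinate;\n"
        "uniform sampler2D inputImageTexture;\n"
        "void main()\n"
        "{\n"
        "    gl_FragColor = texture2D(inputImageTexture, textureCoordinate).bgra;\n"
        "}\n";

    // Hypothetical readback step: after drawing the swizzled quad into the
    // offscreen FBO, a GL_RGBA read returns BGRA-ordered bytes because the
    // channels were already swapped by the shader above.
    static void readSwizzledFrame(GLuint offscreenFramebuffer,
                                  GLsizei width, GLsizei height,
                                  GLubyte *frameData)
    {
        glBindFramebuffer(GL_FRAMEBUFFER, offscreenFramebuffer);
        // ... bind the program built from kSwizzleFragmentShader and draw
        //     the filtered frame as a full-screen quad ...
        glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, frameData);
    }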

On iOS 5.0, I use the texture caches to avoid needing to use glReadPixels(). Because the internal texture color format is BGRA on the iOS devices, no color swizzling is needed. However, I still run these filtered frames through a simple passthrough shader and output them to the offscreen FBO in case the filter chain up to that point was being rendered in a different resolution. This allows you to have movies being recorded at one resolution and display or other actions taking place at another.
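As a rough sketch of that texture-cache path (simplified, with error handling omitted; the helper name is hypothetical, not GPUImage's actual code, though the CoreVideo calls are the real iOS 5.0 API):

    #include <CoreVideo/CoreVideo.h>
    #include <OpenGLES/ES2/gl.h>
    #include <OpenGLES/ES2/glext.h>

    // Wrap a CVPixelBuffer destined for AVAssetWriter as an OpenGL ES
    // texture and attach it to the offscreen FBO. Rendering the passthrough
    // shader into this FBO then writes BGRA pixels directly into the pixel
    // buffer, with no glReadPixels() copy.
    static GLuint attachPixelBufferToFBO(CVOpenGLESTextureCacheRef textureCache,
                                         CVPixelBufferRef pixelBuffer,
                                         GLuint offscreenFramebuffer,
                                         GLsizei width, GLsizei height)
    {
        CVOpenGLESTextureRef renderTexture = NULL;
        CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault,
                                                     textureCache,
                                                     pixelBuffer,
                                                     NULL,        // texture attributes
                                                     GL_TEXTURE_2D,
                                                     GL_RGBA,     // internal format
                                                     width, height,
                                                     GL_BGRA,     // native iOS texture format
                                                     GL_UNSIGNED_BYTE,
                                                     0,           // plane index
                                                     &renderTexture);

        GLuint textureName = CVOpenGLESTextureGetName(renderTexture);
        glBindTexture(GL_TEXTURE_2D, textureName);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

        glBindFramebuffer(GL_FRAMEBUFFER, offscreenFramebuffer);
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                               GL_TEXTURE_2D, textureName, 0);
        return textureName;
    }

Because the render target is the pixel buffer itself, the passthrough draw into this FBO is the only copy the frame makes on its way to the encoder.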

As one optimization, I'm looking to eliminate the passthrough render step for the cases when the input image size matches the output movie encoding size. That will require a little work on the filter architecture, so it might not happen for a while.

Upvotes: 1
