Tony Ruth

Reputation: 1408

Android OpenGL ES 2.0 reuse previous frame

I am looking for a minimal example where each frame reuses the triangles drawn in the previous frame. I have been trying repeatedly without success, so I do not have any code worth showing, although I do have a working program that draws to the default framebuffer.

During onDrawFrame I would add new triangles to a framebuffer object, and the frame would be copied to the default framebuffer. Since I would not clear the framebuffer object, it would retain its RGBA values and depths, so when I add more triangles the next frame, the previous ones would still remain. (Later I will reduce the alpha value of the triangles from previous frames to produce a fade effect, but for simplicity's sake, reusing the previous triangles exactly as they are is fine.)

I am finding it very difficult to understand how the framebuffer object works, and whether or not I need to create render, depth, and texture buffers. I suspect that I need the render and depth buffers, since I would like to retain that information between draws, but do not need the texture buffer.

I thought the onDrawFrame method would look something like this:

  1. New data is added to the framebuffer object.
  2. The default framebuffer is cleared.
  3. The information from the framebuffer object is copied to the default framebuffer and then the default framebuffer is rendered.

I believe I am doing steps 1 and 2 correctly by binding the framebuffer object, the renderbuffer, and the depth buffer, but I cannot figure out a means of copying from one framebuffer to another.
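The three steps above can be sketched roughly as follows (pseudocode, not working code; names like `fbo`, `drawNewTriangles()`, and `drawFullScreenQuad()` are placeholders):

```c
// 1. Render the new triangles into the FBO, without clearing it,
//    so the previous frames' contents are kept.
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
drawNewTriangles();

// 2. Clear the default framebuffer.
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

// 3. Copy the FBO contents to the default framebuffer,
//    e.g. by drawing a textured full-screen quad.
drawFullScreenQuad(fboColorTexture);
```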

Upvotes: 1

Views: 2095

Answers (2)

Reto Koradi

Reputation: 54592

For step 3, you use the texture you rendered to (the one used as the FBO color attachment), and sample from it while drawing a screen-sized quad. You can use very simple shaders for that. The vertex shader for the copy will look something like this:

#version 100
attribute vec2 Pos;
varying vec2 TexCoord;
void main() {
    TexCoord = 0.5 * Pos + 0.5;
    gl_Position = vec4(Pos, 0.0, 1.0);
}

and the fragment shader:

#version 100
uniform sampler2D Tex;
varying vec2 TexCoord;
void main() {
    gl_FragColor = texture2D(Tex, TexCoord);
}

Then you draw a quad that covers the range [-1.0, 1.0] in both x and y.
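Putting it together, the copy pass might look roughly like this (a sketch, not a complete implementation; `copyProgram` is assumed to be the linked program built from the two shaders above, and `fboTexture` the texture used as the FBO color attachment):

```c
// Full-screen quad covering [-1, 1] in x and y, drawn as a triangle strip.
const GLfloat quad[] = {
    -1.0f, -1.0f,
     1.0f, -1.0f,
    -1.0f,  1.0f,
     1.0f,  1.0f,
};

glBindFramebuffer(GL_FRAMEBUFFER, 0);   // target the default framebuffer
glUseProgram(copyProgram);

// Bind the FBO color texture to unit 0 and point the sampler at it.
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, fboTexture);
glUniform1i(glGetUniformLocation(copyProgram, "Tex"), 0);

// Feed the quad positions to the "Pos" attribute.
GLint pos = glGetAttribLocation(copyProgram, "Pos");
glEnableVertexAttribArray(pos);
glVertexAttribPointer(pos, 2, GL_FLOAT, GL_FALSE, 0, quad);

glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
```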

There's another option. Unfortunately it's not portable, but will work on some devices. The following is copied from my own recent answer here: Fast Screen Flicker while NOT drawing on Android OpenGL.

For this approach, you call:

eglSurfaceAttrib(display, surface, EGL_SWAP_BEHAVIOR, EGL_BUFFER_PRESERVED);

while setting up the context/surface. This requests that the buffer content is preserved after eglSwapBuffers(). It comes with a big caveat, though: This is not supported on all devices. You can test if it's supported with:

EGLint surfType = 0;
eglGetConfigAttrib(display, config, EGL_SURFACE_TYPE, &surfType);
if (surfType & EGL_SWAP_BEHAVIOR_PRESERVED_BIT) {
    // supported!
}

You can also request this functionality as part of choosing the config. As part of the attributes passed to eglChooseConfig(), add:

...
EGL_SURFACE_TYPE, EGL_WINDOW_BIT | EGL_SWAP_BEHAVIOR_PRESERVED_BIT,
...

But again, this is not supported on all devices. So it's only really an option if you're targeting specific devices, or have a functional fallback if it's not supported.

Upvotes: 2

solidpixel

Reputation: 12069

A framebuffer object is really just a meta-object: a container for the surfaces attached to it (either textures or renderbuffers). You will need to create the color/depth/stencil surfaces you need and "attach" them to the relevant attachment points on the framebuffer object.
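A minimal setup along those lines might look something like this (a sketch; `width` and `height` are assumed to match the view, and it uses a texture color attachment so the result can later be sampled):

```c
GLuint fbo, colorTex, depthRb;

// Color attachment: a texture, so the result can be sampled later.
glGenTextures(1, &colorTex);
glBindTexture(GL_TEXTURE_2D, colorTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

// Depth attachment: a renderbuffer, since it never needs to be sampled.
glGenRenderbuffers(1, &depthRb);
glBindRenderbuffer(GL_RENDERBUFFER, depthRb);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT16, width, height);

// The FBO itself, with both surfaces attached.
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, colorTex, 0);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                          GL_RENDERBUFFER, depthRb);

if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
    // incomplete framebuffer: handle the error
}
```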

In terms of copying from one surface to another, you can either use glBlitFramebuffer (note this requires OpenGL ES 3.0 or later; it is not available in ES 2.0), or just render a 2D quad using the offscreen surface as a texture, with the texture coordinates set up so it is a 1:1 copy.

Note that "rendering over the top of what is already in memory" is relatively expensive on most mobile GPUs, which are tile-based (they must read the old state back into GPU-local memory), especially if you then need a separate copy to blit the offscreen buffer into the on-screen one. I would suggest profiling to make sure that this scheme really is faster than just re-rendering, as it sounds like it may not be.

Upvotes: 1
