Reputation: 1110
I'm trying to render the same scene (consisting of a large number of instanced meshes) after setting glDepthFunc to GL_LESS and then to GL_GREATER, to get the first and last hit points of the overall scene. The shader program run is identical in both cases; the only difference between the two executions is the setting for glDepthFunc, i.e.:
while (true) {
    glDepthFunc(GL_LESS);
    runShader(buffer1);
    glDepthFunc(GL_GREATER);
    runShader(buffer2);
}
The shader consists of only a vertex shader and a fragment shader. The fragment shader output will definitely be different for the two shader program executions, but the vertex shader output should be identical between the two executions. Is there a way to get OpenGL to reuse the output from the first vertex shader in the second shader program execution, so that only the fragment shader has to run the second time?
Upvotes: 0
Views: 66
Reputation: 22165
Short answer: That's not possible.
Long answer: It's not directly possible to reuse the vertex shader results. There are some approaches that could ensure that a specific vertex shader doesn't have to run twice, but unless the vertex shader is very heavy, I wouldn't expect any of those methods to be faster than just rendering the scene twice.
Option 1: Transform Feedback
One could use transform feedback to write the results of the vertex shader into an additional VBO. Both render passes could then use the feedback buffer as input for rendering. This approach still requires a pass-through vertex shader in the render paths and will very likely lead to worse performance, unless the original vertex shader is so slow that it compensates for the additional overhead of transform feedback plus one additional render pass.
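A minimal sketch of what the two shader stages could look like under this approach. The names (transformPosition via the mvp uniform, outPos, capturedPos) are hypothetical; outPos would have to be registered with glTransformFeedbackVaryings before linking the capture program, and the resulting feedback VBO bound as a vertex attribute for the render passes:

```glsl
// Capture pass: run the expensive vertex work once, record the result.
// "outPos" is the varying captured into the transform feedback buffer.
#version 330 core
layout(location = 0) in vec3 inPosition;
out vec4 outPos;              // written to the feedback VBO

uniform mat4 mvp;             // stand-in for the expensive per-vertex work

void main() {
    outPos = mvp * vec4(inPosition, 1.0);
    gl_Position = outPos;     // rasterization can be discarded in this pass
}
```

```glsl
// Both render passes: pass-through vertex shader that just reads the
// pre-transformed positions from the feedback VBO.
#version 330 core
layout(location = 0) in vec4 capturedPos;

void main() {
    gl_Position = capturedPos;
}
```

The capture pass can run with GL_RASTERIZER_DISCARD enabled, so only the vertex stage executes; the two depth-tested passes then pay only for the trivial pass-through shader.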
Option 2: Atomic operations
Instead of regular rendering with depth testing, one could use atomicMin and atomicMax in the fragment shader to compute the min/max depth at each pixel. This approach requires two uint image textures (one for min, one for max), and you'd have to convert the floating-point depth values to a fixed-point (uint) representation yourself. Again, it's quite likely that the overhead of losing early depth testing and performing atomic operations in the fragment shader will be slower than just rendering the whole scene twice.
Upvotes: 3