paulinodjm

Reputation: 235

WebGL - two pass rendering

I have a usage question about WebGL.

Recently, I had to render a post-processed image of a given geometry in real time.

My idea was:

  1. the geometry is projected on the screen by the vertex shader
  2. a first fragment shader is used to render this geometry offscreen
  3. a second fragment shader post-processes the offscreen image and displays the result on the canvas.

How I implemented it:

I wrote a first pair of shaders for the offscreen render. I use it to draw the geometry to a texture through a framebuffer.

For the second part, I created a second GLSL program. Here, the vertex shader projects a rectangle that covers the whole screen. The fragment shader picks the appropriate pixel from the offscreen texture through a sampler2D and does all its post-processing.
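Roughly, the wiring looks like the sketch below (WebGL 1; `geometryProgram`, `postProcessProgram`, `drawGeometry` and `drawFullScreenQuad` are placeholders, not my actual code):

```typescript
// Minimal sketch of the two passes (WebGL 1). The programs and draw helpers
// below are placeholders standing in for the real scene code.
declare const geometryProgram: WebGLProgram;     // pass 1: renders the geometry
declare const postProcessProgram: WebGLProgram;  // pass 2: full-screen post-process
declare function drawGeometry(): void;           // placeholder scene draw calls
declare function drawFullScreenQuad(): void;     // placeholder quad draw call

const canvas = document.querySelector('canvas') as HTMLCanvasElement;
const gl = canvas.getContext('webgl') as WebGLRenderingContext;

// Offscreen colour target: written by pass 1, sampled by pass 2.
const offscreenTexture = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, offscreenTexture);
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, canvas.width, canvas.height, 0,
              gl.RGBA, gl.UNSIGNED_BYTE, null);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);

const framebuffer = gl.createFramebuffer();
gl.bindFramebuffer(gl.FRAMEBUFFER, framebuffer);
gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0,
                        gl.TEXTURE_2D, offscreenTexture, 0);

function render(): void {
  // Pass 1: draw the geometry into the offscreen texture.
  gl.bindFramebuffer(gl.FRAMEBUFFER, framebuffer);
  gl.viewport(0, 0, canvas.width, canvas.height);
  gl.useProgram(geometryProgram);
  drawGeometry();

  // Pass 2: draw a full-screen rectangle to the canvas, sampling the texture.
  gl.bindFramebuffer(gl.FRAMEBUFFER, null);
  gl.viewport(0, 0, canvas.width, canvas.height);
  gl.useProgram(postProcessProgram);
  gl.bindTexture(gl.TEXTURE_2D, offscreenTexture);
  drawFullScreenQuad();
}
```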

This seems odd to me, for two reasons:

  1. In order to be 'renderable', the offscreen texture has to be created with power-of-two dimensions, and thus can be significantly larger than the canvas itself.
  2. Using a second vertex shader seems redundant. Is it possible to skip this step and go directly to the second fragment shader to draw the offscreen texture to the canvas?

So, the big question is: what is the proper way to achieve this? What am I doing right, and what am I doing wrong?

Thank you for your advice :)

Upvotes: 2

Views: 2070

Answers (1)

LJᛃ

Reputation: 8123

In order to be 'renderable', the offscreen texture has to be created with power-of-two dimensions, and thus can be significantly larger than the canvas itself.

No, it does not. It only needs power-of-two dimensions when you require mipmapped filtering; creating and rendering to NPOT (non-power-of-two) textures with LINEAR or NEAREST filtering is totally fine. Note that NPOT textures only support CLAMP_TO_EDGE wrapping.
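For example, the render target can simply match the canvas size. A minimal sketch (the `gl` context and the 640×360 size are assumptions for illustration):

```typescript
declare const gl: WebGLRenderingContext;  // assumed from the surrounding setup

// A 640x360 render target (not a power of two) is valid in WebGL 1 as long as
// no mipmaps are used and wrapping stays at CLAMP_TO_EDGE.
const tex = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, tex);
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, 640, 360, 0,
              gl.RGBA, gl.UNSIGNED_BYTE, null);
// LINEAR or NEAREST only; a mipmapped MIN_FILTER would make it incomplete.
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.LINEAR);
// NPOT textures only support CLAMP_TO_EDGE wrapping.
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);
```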

Using a second vertex shader seems redundant. Is it possible to skip this step and go directly to the second fragment shader to draw the offscreen texture to the canvas?

Unfortunately not. You could use one and the same vertex shader for both render passes by simply attaching it to both programs; however, this would require your vertex shader logic to apply to both geometries, which is rather unlikely, and you're switching programs anyway, so there is nothing to gain here.
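For what it's worth, the pass-2 vertex shader is usually just a pass-through for the full-screen rectangle, so there is hardly any work to skip. A sketch, with illustrative attribute/varying names:

```typescript
// Typical pass-through vertex shader for the full-screen rectangle; the
// attribute and varying names here are illustrative.
const postVertexShaderSource = `
  attribute vec2 aPosition;            // quad corners in clip space, (-1,-1)..(1,1)
  varying vec2 vTexCoord;

  void main() {
    vTexCoord = aPosition * 0.5 + 0.5; // map clip space to [0, 1] texture coords
    gl_Position = vec4(aPosition, 0.0, 1.0);
  }
`;
```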

Upvotes: 2
