Reputation: 21
I'm trying to do image processing using shaders in GLSL, for good performance and portability.
But I have multiple steps to transform the image, and each step needs the output of the previous step.
For example, I want to blur the image, so I need the information from the pixels surrounding each one in order to average them. That is not a problem, I just use texture2D(u_texture, v_texCoords);
and after the processing I have a vec4 blurred.
Then, after the blur, I want to do an edge detection on the previously blurred image, but I can't do it using the vec4 blurred,
because it does not give me access to the surrounding pixels. And if I use texture2D(u_texture, v_texCoords);
again, I run the process on the original image and not on the blurred one.
In other words, after each step of image processing I want to have access to all the pixels of the previous step.
(I am using Java with libGDX, and GLSL for the shaders.)
Thank you.
Upvotes: 1
Views: 139
Reputation: 6766
The standard approach to something like this is to 'ping-pong' between two render-to-texture (RTT) buffers. For example, in the case of a blur followed by an edge detection you might do the following render passes:

1. Horizontal blur: sample the source image, render into buffer A.
2. Vertical blur: sample buffer A, render into buffer B.
3. Edge detection: sample buffer B, render to the screen (or back into buffer A if more passes follow).
The same principle can be extended to chain together any number of post-processing effects, continuously bouncing between the two buffers.
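In libGDX each RTT buffer would be a FrameBuffer: you render between its begin()/end() calls and bind getColorBufferTexture() as the input of the next pass. The swap bookkeeping is the part that's easy to get wrong, so here is a minimal CPU sketch of it, assuming hypothetical 1-D stand-ins (BLUR, EDGE) for the real shader passes:

```java
import java.util.Arrays;

public class PingPongDemo {
    // A "pass" reads any pixel of src and writes dst, like a fragment
    // shader that samples the previous render target.
    interface Pass { void apply(float[] src, float[] dst); }

    // Hypothetical stand-in for the blur shader: 3-tap box blur, clamped at edges.
    static final Pass BLUR = (src, dst) -> {
        int n = src.length;
        for (int i = 0; i < n; i++) {
            float l = src[Math.max(i - 1, 0)];
            float r = src[Math.min(i + 1, n - 1)];
            dst[i] = (l + src[i] + r) / 3f;
        }
    };

    // Hypothetical stand-in for the edge-detection shader: central difference.
    static final Pass EDGE = (src, dst) -> {
        int n = src.length;
        for (int i = 0; i < n; i++) {
            float l = src[Math.max(i - 1, 0)];
            float r = src[Math.min(i + 1, n - 1)];
            dst[i] = Math.abs(r - l) * 0.5f;
        }
    };

    // Apply the passes in order, ping-ponging between two reusable buffers.
    static float[] runChain(float[] source, Pass... passes) {
        float[][] buf = { source.clone(), new float[source.length] };
        int read = 0;
        for (Pass p : passes) {
            int write = 1 - read;           // the other buffer
            p.apply(buf[read], buf[write]); // "render" read -> write
            read = write;                   // next pass reads what we just wrote
        }
        return buf[read];
    }

    public static void main(String[] args) {
        float[] image = { 0f, 0f, 0f, 3f, 3f, 3f };
        // Edge detection sees the *blurred* image, not the original.
        System.out.println(Arrays.toString(runChain(image, BLUR, EDGE)));
        // → [0.0, 0.5, 1.0, 1.0, 0.5, 0.0]
    }
}
```

The key point is that each pass gets a whole buffer as input, so it can sample any neighbouring pixel of the previous step's result — exactly what the vec4 from a single fragment cannot give you.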
Note that the blur above is split into horizontal and vertical passes, taking advantage of the fact that many blurs (such as box and Gaussian) are separable. For large kernels this is much more efficient — an N×N kernel costs N² texture reads per pixel in one pass, but only 2N reads across the two 1-D passes — and you end up with the same result.
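That equivalence can be checked on the CPU. A sketch, assuming a 3×3 box kernel and clamp-to-edge sampling (the names full2D and pass1D are mine):

```java
public class SeparableBlur {
    static int clamp(int i, int n) { return Math.max(0, Math.min(i, n - 1)); }

    // Direct 3x3 box blur: 9 reads per pixel.
    static float[][] full2D(float[][] img) {
        int h = img.length, w = img[0].length;
        float[][] out = new float[h][w];
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++) {
                float sum = 0f;
                for (int dy = -1; dy <= 1; dy++)
                    for (int dx = -1; dx <= 1; dx++)
                        sum += img[clamp(y + dy, h)][clamp(x + dx, w)];
                out[y][x] = sum / 9f;
            }
        return out;
    }

    // One 1-D pass along (dx, dy): 3 reads per pixel. Running it
    // horizontally then vertically totals 6 reads instead of 9,
    // and the gap grows quickly with kernel size.
    static float[][] pass1D(float[][] img, int dx, int dy) {
        int h = img.length, w = img[0].length;
        float[][] out = new float[h][w];
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++)
                out[y][x] = (img[clamp(y - dy, h)][clamp(x - dx, w)]
                           + img[y][x]
                           + img[clamp(y + dy, h)][clamp(x + dx, w)]) / 3f;
        return out;
    }

    public static void main(String[] args) {
        float[][] img = { {0, 1, 2}, {3, 4, 5}, {6, 7, 8} };
        float[][] a = full2D(img);
        float[][] b = pass1D(pass1D(img, 1, 0), 0, 1); // horizontal, then vertical
        float maxDiff = 0f;
        for (int y = 0; y < 3; y++)
            for (int x = 0; x < 3; x++)
                maxDiff = Math.max(maxDiff, Math.abs(a[y][x] - b[y][x]));
        System.out.println("max difference = " + maxDiff); // ~0, up to float rounding
    }
}
```

On the GPU the two 1-D passes are exactly the first two render passes of the ping-pong chain: one shader with the sample offsets along x, one with them along y.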
Upvotes: 3