Reputation: 694
I am beginning to add deferred shading to my game engine, and I am having some trouble understanding how to retrieve eye-space coordinates from a depth texture.
I have read about two methods in this post, and by my understanding:

1. Sample the depth directly from a depth texture attached as GL_DEPTH_ATTACHMENT to my framebuffer (a minimal setup sketch follows this list).
2. Calculate the depth in a fragment shader and store it in a texture attached as GL_COLOR_ATTACHMENTn.

The reconstruction for this second method seems too complicated for what I need, so I would rather not use that if possible.
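For reference, a minimal host-side sketch of that first setup, attaching a depth texture as GL_DEPTH_ATTACHMENT so it can be sampled later. It is plain C with core OpenGL calls, and fbo, width, and height are assumed to already exist:

    GLuint depthTex;
    glGenTextures(1, &depthTex);
    glBindTexture(GL_TEXTURE_2D, depthTex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, width, height, 0,
                 GL_DEPTH_COMPONENT, GL_FLOAT, NULL);

    /* Attach the texture as the framebuffer's depth attachment. */
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                           GL_TEXTURE_2D, depthTex, 0);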
For reconstructing eye coordinates, the first method takes the texture coordinates, the depth, and a w of 1.0 in a vec4, multiplies that by the inverse perspective matrix, and then divides the result by its w.
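Here is a minimal GLSL sketch of that reconstruction; the names (u_DepthTex, u_InvProjection) are illustrative. Note that both the texture coordinates and the sampled depth are remapped from [0, 1] to the [-1, 1] NDC range before the multiply:

    uniform sampler2D u_DepthTex;      // texture bound to GL_DEPTH_ATTACHMENT
    uniform mat4      u_InvProjection; // inverse of the perspective matrix

    vec3 reconstructEyePos(vec2 texCoord)
    {
        // Depth sampled from a depth attachment is window-space, in [0, 1].
        float depth = texture(u_DepthTex, texCoord).r;

        // Remap texcoords and depth from [0, 1] to NDC [-1, 1], with w = 1.0.
        vec4 ndc = vec4(texCoord * 2.0 - 1.0, depth * 2.0 - 1.0, 1.0);

        // Undo the projection, then the perspective divide.
        vec4 eye = u_InvProjection * ndc;
        return eye.xyz / eye.w;
    }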
Will this method work unaltered with the depth stored in my GL_DEPTH_ATTACHMENT texture, and will it be accurate enough to do lighting with?
If it won't work unaltered, then what do I need to do with my depth to make it work?
Upvotes: 1
Views: 1037
Reputation: 43329
The reason you might calculate the depth in the shader and store it in a color buffer is that the depth buffer is non-linear. Because of the perspective divide, you have to do extra work on a sampled depth value in order to get linear depth (which is what you want for view/world-space position reconstruction).
It is tempting to write a linear value to gl_FragDepth as a solution, and I have seen some tutorials even do this... but do not do it! Writing to gl_FragDepth destroys modern hardware depth-buffer optimizations like hierarchical Z-buffering and early depth testing.
The second approach that you pointed out is not only needlessly complicated, it is also horribly inefficient. You are better off simply linearizing the non-linear depth buffer value during reconstruction. A few extra fragment shader instructions should be quicker than writing the depth to two locations (the implicit write to the depth buffer, plus an explicit one to a dedicated color buffer) during G-Buffer creation.
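As a concrete illustration of that linearization, here is a short GLSL sketch. It assumes a standard perspective projection, and u_Near / u_Far are illustrative uniforms holding the projection's near and far plane distances:

    uniform float u_Near; // near plane distance used by the projection
    uniform float u_Far;  // far plane distance used by the projection

    // Converts a [0, 1] depth-buffer value to a linear eye-space distance.
    float linearizeDepth(float depth)
    {
        float zNdc = depth * 2.0 - 1.0; // back to NDC [-1, 1]
        return (2.0 * u_Near * u_Far)
             / (u_Far + u_Near - zNdc * (u_Far - u_Near));
    }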
Upvotes: 1