Reputation:
I'm using deferred rendering in my application and I'm trying to create a texture that will contain both the depth and the stencil.
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH24_STENCIL8, width, height, 0,
???, GL_FLOAT, 0);
Now, what format enum does OpenGL want for this particular texture? I tried a couple and got errors for all of them.
Also, what is the correct GLSL syntax to access the depth and stencil parts of the texture? I understand that depth textures are usually declared as uniform sampler2DShadow. But do I do:
float depth = texture(depthstenciltex,uv).r;// <- first bit ? all 32 bit ? 24 bit ?
float stencil = texture(depthstenciltex,uv).a;
Upvotes: 6
Views: 7169
Reputation: 43319
Now what format enum does opengl want for this particular texture.
The problem you are running into is that Depth+Stencil is a totally oddball combination of data. The first 24 bits (depth) are fixed-point and the remaining 8 bits (stencil) are unsigned integer. This requires the GL_DEPTH_STENCIL format together with a special packed data type: GL_UNSIGNED_INT_24_8.
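Put together, the upload call looks roughly like this (a minimal sketch; width and height stand for whatever your render target uses, and NULL means no initial pixel data is uploaded):
// Allocate a 24-bit depth + 8-bit stencil texture.
// The format must be GL_DEPTH_STENCIL and the type must be the packed
// GL_UNSIGNED_INT_24_8 to match the GL_DEPTH24_STENCIL8 internal format.
glTexImage2D (GL_TEXTURE_2D, 0, GL_DEPTH24_STENCIL8, width, height, 0,
              GL_DEPTH_STENCIL, GL_UNSIGNED_INT_24_8, NULL);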
Also, what is the correct glsl syntax to access the depth and stencil part of the texture. I understand that depth texture are usually uniform sampler2Dshadow.
You will actually never be able to sample both of those things using the same sampler uniform and here is why:
OpenGL Shading Language 4.50 Specification - 8.9 Texture Functions - p. 158
For depth/stencil textures, the sampler type should match the component being accessed as set through the OpenGL API. When the depth/stencil texture mode is set to GL_DEPTH_COMPONENT, a floating-point sampler type should be used. When the depth/stencil texture mode is set to GL_STENCIL_INDEX, an unsigned integer sampler type should be used. Doing a texture lookup with an unsupported combination will return undefined values.
This means if you want to use both the depth and stencil in a shader, you are going to have to use texture views (OpenGL 4.2+) and bind those views to two different Texture Image Units (each view has a different state for GL_DEPTH_STENCIL_TEXTURE_MODE). Both of these things together mean you are going to need at least an OpenGL 4.4 implementation.
#version 440
// Sampling the stencil index of a depth+stencil texture became core in OpenGL 4.4
layout (binding=0) uniform sampler2D depth_tex;
layout (binding=1) uniform usampler2D stencil_tex;
in vec2 uv;
void main (void) {
  float depth   = texture (depth_tex,   uv).r; // depth is in the red channel
  uint  stencil = texture (stencil_tex, uv).r; // stencil index is in the red channel
}
// Alternate view of the image data in `depth_stencil_texture`
GLuint stencil_view;
glGenTextures (1, &stencil_view);
glTextureView (stencil_view, GL_TEXTURE_2D, depth_stencil_tex,
GL_DEPTH24_STENCIL8, 0, 1, 0, 1);
// ^^^ This requires `depth_stencil_tex` be allocated using `glTexStorage2D (...)`
// to satisfy `GL_TEXTURE_IMMUTABLE_FORMAT` == `GL_TRUE`
// Texture Image Unit 0 will treat it as a depth texture
glActiveTexture (GL_TEXTURE0);
glBindTexture (GL_TEXTURE_2D, depth_stencil_tex);
glTexParameteri (GL_TEXTURE_2D, GL_DEPTH_STENCIL_TEXTURE_MODE, GL_DEPTH_COMPONENT);
// Texture Image Unit 1 will treat the stencil view of depth_stencil_tex accordingly
glActiveTexture (GL_TEXTURE1);
glBindTexture (GL_TEXTURE_2D, stencil_view);
glTexParameteri (GL_TEXTURE_2D, GL_DEPTH_STENCIL_TEXTURE_MODE, GL_STENCIL_INDEX);
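For completeness, here is a minimal sketch of how depth_stencil_tex itself could be allocated with immutable storage so that glTextureView (...) accepts it (one mip level; width and height are assumed to match your G-buffer):
GLuint depth_stencil_tex;
glGenTextures (1, &depth_stencil_tex);
glBindTexture (GL_TEXTURE_2D, depth_stencil_tex);
// Immutable storage (GL_TEXTURE_IMMUTABLE_FORMAT == GL_TRUE) is required
// before this texture can serve as the origin of a texture view.
glTexStorage2D (GL_TEXTURE_2D, 1, GL_DEPTH24_STENCIL8, width, height);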
Upvotes: 15
Reputation:
Nvm, found it:
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH24_STENCIL8, w, h, 0, GL_DEPTH_STENCIL,
             GL_UNSIGNED_INT_24_8, 0);
GL_UNSIGNED_INT_24_8 was what I was missing.
Usage in GLSL (#version 330):
uniform sampler2D depthstenciltex;
...
float depth = texture(depthstenciltex, uv).r; // reads the first 24 bits (depth),
                                              // normalized to [0, 1]
Upvotes: 2