Reputation: 9632
Is there a way to render monochromatically to a frame buffer in OpenGL?
My end goal is to render to a cube map texture to create shadow maps for shading in my application.
From what I understand, one way to do this would be, for each light source, to render the scene 6 times (using the 6 possible axis-aligned orientations for the camera), each to its own FBO, and then copy all of them into the cube map.
I already have the shaders that render the depth map for one such camera position. However, these shaders render in full RGB, which, for a depth map, is 3 times bigger than it needs to be. Is there a way to render monochromatically so as to reduce the size of the textures?
Upvotes: 0
Views: 440
Reputation: 393
How do you create the texture(s) for your shadow map (or cube map)? If you use a GL_DEPTH_COMPONENT[16|24|32]
internal format when creating the texture, the texture will be single-channel, as you want.
Check the official documentation: https://www.khronos.org/registry/OpenGL-Refpages/gl2.1/xhtml/glTexImage2D.xml
GL_DEPTH_COMPONENT
Each element is a single depth value. The GL converts it to floating point, multiplies by the signed scale factor GL_DEPTH_SCALE, adds the signed bias GL_DEPTH_BIAS, and clamps to the range [0,1] (see glPixelTransfer).
As you can see, it says each element is a SINGLE depth value.
So if you use something like this:
for (i = 0; i < 6; i++)
    glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X + i,
                 0,
                 GL_DEPTH_COMPONENT24,
                 size,
                 size,
                 0,
                 GL_DEPTH_COMPONENT,
                 GL_FLOAT,
                 NULL);
each element must be 24 bits (possibly padded to 32). Otherwise it would make no sense to let you specify a depth size if the driver stored the values as RGB[A].
This post also confirms that a depth texture is a single-channel texture: https://www.opengl.org/discussion_boards/showthread.php/123939-How-is-data-stored-in-GL_DEPTH_COMPONENT24-texture
"I alrady have the shaders that render the depth map for one such camera position. However, these shaders render in full RGB, which, for a depth map, is 3 times bigger than it needs to be."
In general you render the scene to a shadow map to get depth values (or distances), right? Then why render as RGB at all? If you only need depth values, you don't need any color attachments, because you never write to them; you only write to the depth buffer (OpenGL does this by itself unless you override the depth value in the fragment shader).
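If it helps, here is a minimal sketch of a depth-only cube-map setup along those lines. It is only an illustration of the idea under my assumptions about your code: depthCubeMap, fbo and size are placeholder names, and the per-face view matrix and the actual scene draw are left out.

GLuint depthCubeMap, fbo;

/* One single-channel depth image per cube map face. */
glGenTextures(1, &depthCubeMap);
glBindTexture(GL_TEXTURE_CUBE_MAP, depthCubeMap);
for (int i = 0; i < 6; i++)
    glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X + i, 0, GL_DEPTH_COMPONENT24,
                 size, size, 0, GL_DEPTH_COMPONENT, GL_FLOAT, NULL);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_R, GL_CLAMP_TO_EDGE);

/* Depth-only FBO: no color attachment, so tell GL not to expect color writes. */
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glDrawBuffer(GL_NONE);
glReadBuffer(GL_NONE);

/* Render the depth of the scene once per cube map face. */
for (int face = 0; face < 6; face++) {
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                           GL_TEXTURE_CUBE_MAP_POSITIVE_X + face,
                           depthCubeMap, 0);
    glClear(GL_DEPTH_BUFFER_BIT);
    /* set the view matrix for this face and draw the scene here */
}
glBindFramebuffer(GL_FRAMEBUFFER, 0);

The fragment shader for this pass can even be empty; the rasterizer fills the depth buffer on its own.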
Upvotes: 1