Reputation: 512
I am using FBO and rendering to a texture. Here's my code :
GLuint FramebufferName = 0;
glGenFramebuffers(1, &FramebufferName);
glBindFramebuffer(GL_FRAMEBUFFER, FramebufferName);
GLuint renderedTexture;
glGenTextures(1, &renderedTexture);
glBindTexture(GL_TEXTURE_2D, renderedTexture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB10_A2UI, 256, 256, 0, GL_RGBA_INTEGER, GL_UNSIGNED_INT_10_10_10_2, 0);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
GLuint color_buffer;
glGenRenderbuffers(1, &color_buffer);
glBindRenderbuffer(GL_RENDERBUFFER, color_buffer);
glRenderbufferStorage(GL_RENDERBUFFER, GL_RGB10_A2UI, 256, 256);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, color_buffer);
glFramebufferTexture(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, renderedTexture, 0);
GLenum DrawBuffers[2] = {GL_COLOR_ATTACHMENT0};
glDrawBuffers(1, DrawBuffers);
GLuint textureId;
LoadImage(textureId); // Loads texture data into textureId
Render(); // Renders textureId onto FramebufferName
glBindFramebuffer(GL_FRAMEBUFFER, 0); // Bind default FBO
glBindTexture(GL_TEXTURE_2D, renderedTexture); //Render using renderedTexture
glDrawArrays(GL_TRIANGLE_FAN, 0, 4);
The output is incorrect; the image is not rendered correctly. If I use format GL_RGBA instead of GL_RGB10_A2UI everything goes fine. The FBO is GL_FRAMEBUFFER_COMPLETE, so no issues there. Am I doing something wrong here?
My fragment shader for GL_RGB10_A2UI is :
in vec2 texcoord;
uniform usampler2D basetexture;
out vec4 Color;
void main(void)
{
    uvec4 IntColor = texture(basetexture, texcoord);
    Color = vec4(IntColor.rgb, 1023.0) / 1023.0;
}
For GL_RGBA I am not doing the normalization in the shader.
Upvotes: 0
Views: 931
Reputation: 473322
"If I use format GL_RGBA instead of GL_RGB10_A2UI everything goes fine."
If that's true, then it means your shader is not writing integers.
We've discussed this before, but you don't really seem to understand something. An integer texture is a very different thing from a floating-point texture or a normalized integer texture.
There is no circumstance where a GL_RGBA8 texture and a GL_RGB10_A2UI texture would both work with the same code. The same shader cannot read from a texture that could be either normalized or integral. The same shader cannot write to a buffer that could be either normalized or integral. The same pixel transfer code cannot write to or read from an image that could be either normalized or integral. They are in every way different entities, which require different parameters to access them.
Furthermore, even if a shader could write to either one, what would it be writing? Integer textures take integers; if you attempt to stick a floating-point value in the range [0, 1] into an integer, it will come out as either 0 or 1. And if you try to squeeze an integer into the [0, 1] range, you will get 0 if the integer was zero and 1 otherwise. So whatever your fragment shader is doing is very confused.
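To make the truncation concrete, here is a throwaway GLSL fragment shader sketch (the output name and the constant values are purely illustrative):
#version 330
out uvec4 Color; // unsigned integer output, as an integer attachment requires

void main(void)
{
    // Float-to-integer conversion truncates: uint(0.75) is 0u, uint(1.0) is 1u.
    // So feeding normalized [0, 1] colors into an integer output throws away
    // essentially all of the color information.
    Color = uvec4(uint(0.75), uint(1.0), 0u, 1u);
}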
Odds are very good that you really should be using GL_RGB10_A2, despite your belief that you really did mean to use GL_RGB10_A2UI. If you really meant to be writing integers, your shader would be writing integers and not floats, and therefore your shader would not have "worked" with GL_RGBA8.
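For comparison, a minimal sketch of the normalized route, reusing the 256x256 size from the question (GL_UNSIGNED_INT_2_10_10_10_REV is one valid client type for this internal format):
// GL_RGB10_A2 is the normalized counterpart: the shader samples it with a
// plain sampler2D, writes to a plain `out vec4`, and no manual scaling by
// 1023 is needed anywhere.
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB10_A2, 256, 256, 0,
             GL_RGBA, GL_UNSIGNED_INT_2_10_10_10_REV, 0);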
However, if you really, truly want to use an unsigned integral texture, and you really, truly understand what that means and how it is different from GL_RGB10_A2, then here are the things you have to do:
Any fragment shader that intends to write to an integer texture must write to an integer output variable. And the signed/unsigned state of that output must match the destination image's signed/unsigned format. So for your GL_RGB10_A2UI, an unsigned integer format, you must be writing to a uvec4. You should be writing integers to this value, not floating-point values.
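A minimal write-side sketch, assuming the pass that draws into the FBO samples an ordinary normalized source texture (the uniform name `source` and the scale factors are illustrative):
#version 330

in vec2 texcoord;
uniform sampler2D source;   // ordinary normalized texture being drawn into the FBO
out uvec4 Color;            // unsigned integer output to match GL_RGB10_A2UI

void main(void)
{
    vec4 c = texture(source, texcoord);
    // Scale [0, 1] floats up to the attachment's integer ranges:
    // 10 bits per color channel (0..1023) and 2 bits of alpha (0..3).
    Color = uvec4(c.rgb * 1023.0, c.a * 3.0);
}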
Any shader that intends to read from an integer texture must use an integer sampler uniform. And that sampler uniform must match the signed/unsigned format of the image. So for your GL_RGB10_A2UI, you must use a usampler2D.
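The read side then looks much like the shader already posted in the question: an unsigned sampler plus manual normalization (1023 for the 10-bit channels, 3 for the 2-bit alpha):
#version 330

in vec2 texcoord;
uniform usampler2D basetexture; // must be an unsigned sampler for GL_RGB10_A2UI
out vec4 Color;

void main(void)
{
    uvec4 i = texture(basetexture, texcoord);
    // The texels come back as raw unsigned integers; normalize them by hand.
    Color = vec4(vec3(i.rgb) / 1023.0, float(i.a) / 3.0);
}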
Pixel transfer operations must explicitly use the _INTEGER pixel transfer formats.
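A sketch of what that means against the question's 256x256 texture; GL_UNSIGNED_INT for the read-back is just one valid choice of type:
// Allocating/uploading: the pixel transfer format must be GL_RGBA_INTEGER,
// not GL_RGBA, because the internal format is an integer format.
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB10_A2UI, 256, 256, 0,
             GL_RGBA_INTEGER, GL_UNSIGNED_INT_10_10_10_2, 0);

// Reading back: again GL_RGBA_INTEGER; each channel arrives as a 32-bit uint.
std::vector<GLuint> pixels(256 * 256 * 4);
glReadPixels(0, 0, 256, 256, GL_RGBA_INTEGER, GL_UNSIGNED_INT, pixels.data());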
Upvotes: 1