agrum

Reputation: 397

How to read full range of a 32 bits integer texture in GLSL

I successfully upload and download data to an integer texture with R32UI as the internal format. The texture is 1000x600, and I assign each pixel a unique value (x + y*height). When I read the texture back, the values are correct.
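For reference, a minimal sketch of how such a buffer could be filled (the question doesn't show this part, so the details are assumed; BufferUtils is LWJGL's helper for direct buffers):

import java.nio.IntBuffer;
import org.lwjgl.BufferUtils;

// Assumed setup: fill a direct IntBuffer with the per-pixel values
// described above (x + y*height) for a 1000x600 texture.
int width = 1000, height = 600;
IntBuffer data = BufferUtils.createIntBuffer(width * height);
for (int y = 0; y < height; y++) {
    for (int x = 0; x < width; x++) {
        data.put(x + y * height);
    }
}
data.flip(); // rewind so glTexImage2D reads from the start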

The problem is on the shader/rendering side. When sampling the usampler2D with texture(), the range [0 - 255] appears stretched to [0 - 2^32-1]. It looks as if the texel value is taken modulo 256, divided by 256, and then multiplied by 2^32-1.

The visual:

[screenshot of the rendered output]

The fragment shader:

#version 430

in vec2 gTextCoord;

layout(binding = 0) uniform usampler2D uInputImg;

out vec4 FragColor;

void main() 
{ 
    FragColor = vec4(vec3(texture(uInputImg, gTextCoord).rgb) / 4294967295.0, 1);
}

The upload:

int id = GL11.glGenTextures();
GL11.glBindTexture(GL11.GL_TEXTURE_2D, id);
GL11.glTexImage2D(
    GL11.GL_TEXTURE_2D, 
    0, 
    GL30.GL_R32UI, 
    width, 
    height, 
    0, 
    GL30.GL_RED_INTEGER, 
    GL11.GL_UNSIGNED_INT, 
    data);
GL11.glBindTexture(GL11.GL_TEXTURE_2D, 0);
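The matching download isn't shown in the question; a sketch of what it could look like (assuming glGetTexImage and reusing width, height, and id from above):

GL11.glBindTexture(GL11.GL_TEXTURE_2D, id);
IntBuffer readback = BufferUtils.createIntBuffer(width * height);
// The raw 32-bit unsigned values come back untouched (no normalization),
// which is why the CPU-side round trip works even though the shader output looks wrong.
GL11.glGetTexImage(GL11.GL_TEXTURE_2D, 0, GL30.GL_RED_INTEGER, GL11.GL_UNSIGNED_INT, readback);
GL11.glBindTexture(GL11.GL_TEXTURE_2D, 0);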

The binding:

GL13.glActiveTexture(GL13.GL_TEXTURE0 + binding);
GL11.glBindTexture(GL11.GL_TEXTURE_2D, id);

I would like to get back the exact value I assigned to each texel. Is it possible? If so, what am I missing? If not, what is blocking it, and what are the alternatives?

Thanks.

Upvotes: 1

Views: 1565

Answers (1)

Jerem

Reputation: 1862

I think the problem is float precision.

Basically you use a very big float (around 4 billion) in a division and expect a very small one (in [0, 1]) as a result. A float only carries about 24 significant bits, so the big value is so imprecise relative to the [0, 1] range that the result you get is just noise.
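To put numbers on this (my illustration, not part of the original answer):

2^32 - 1        = 4294967295
float(2^32 - 1) = 4294967296.0   (rounded: only 24 significant bits survive)

For values just below 2^32, consecutive representable floats are 256 apart, so roughly 256 neighbouring texel values all convert to the same float before the division even happens.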

What you need to do is bring those two ranges closer together. You can either use a 16-bit integer texture instead, or divide the texture sample with an integer division before converting it to float (and then divide by a smaller float), as in the sketch below.
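A minimal GLSL sketch of the second option (the divisor 65536 is an assumption; pick it so the reduced values still cover your data's actual range):

#version 430

in vec2 gTextCoord;

layout(binding = 0) uniform usampler2D uInputImg;

out vec4 FragColor;

void main()
{
    uint texel = texture(uInputImg, gTextCoord).r;
    // The division happens in uint arithmetic, so it is exact; the reduced
    // value fits in 16 bits, well inside float's 24-bit precision.
    uint reduced = texel / 65536u;
    FragColor = vec4(vec3(float(reduced) / 65535.0), 1.0);
}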

Upvotes: 1
