tuket

Reputation: 3941

OpenGL weird line linear filtering

I have programmed the following shader for testing how linear filtering works in OpenGL.

[Screenshot: the texture rendered on a cube face, showing the five test rows]

Here we have a 5x1 texture mapped onto a face of a cube (the magenta region is just the background color).

The texture is this one (it's very small).

[Image: the 5x1 texture, enlarged]

The bottom-left corner corresponds to uv = (0, 0) and the top-right to uv = (1, 1). Linear filtering is enabled.

The shader splits the v coordinate vertically into 5 rows (from top to bottom):

  1. Continuous sampling. Just sample normally.
  2. Green if u is in [0, 1], red otherwise. Just for testing purposes.
  3. The u coordinate in gray scale.
  4. Sampling at the left of the texel.
  5. Sampling at the center of the texel.

The problem is that between rows 3 and 4 there is a one-pixel row that flickers. The flickering changes with the camera distance, and sometimes it even disappears. The problem seems to be in the shader code that handles the fourth row:

// sample at the left edge of the texel
// the following line fixes the problem if the added constant is anything other than 0
tc.y += 0.000000;  // replace 0.000000 with any non-zero value and it works fine
tc.x = floor(5.0 * tc.x) * 0.2;
c = texture(tex0, tc);

This looks weird to me because in that region the v coordinate is nowhere near any edge of the texture.

Upvotes: 1

Views: 1161

Answers (1)

derhass

Reputation: 45342

Your code relies on undefined values during the texture fetch.

The GLSL 4.60 specification states in Section 8.9 Texture Functions (emphasis mine):

Some texture functions (non-“Lod” and non-“Grad” versions) may require implicit derivatives. Implicit derivatives are undefined within non-uniform control flow and for non-fragment-shader texture fetches.

While most people think those derivatives are only required for mip-mapping, that is not correct. The LOD factor is also needed to determine whether the texture is magnified or minified (and for anisotropic filtering in the non-mipmapped case, but that is not of interest here).
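As the quoted wording hints, the explicit-LOD texture functions do not need implicit derivatives at all. So one way to sidestep the issue when mip-mapping is not wanted is to fetch with an explicit LOD; a minimal sketch, reusing the sampler name `tex0` from the question:

```glsl
// textureLod takes an explicit LOD, so no implicit derivatives are
// needed and the fetch is well defined even inside non-uniform
// control flow; LOD 0 always samples the base level
c = textureLod(tex0, tc, 0.0);
```

With an LOD of 0 or less the magnification filter is used, which matches the non-mipmapped GL_LINEAR setup discussed below.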

GPUs usually approximate the derivatives by finite differencing between neighboring pixels in a 2x2 pixel quad. At the edge between your various options you have non-uniform control flow: for one line of pixels you do the texture filtering, and on the line above you don't. The finite differencing then tries to access the texture coordinates used by the sampling operation in the upper row, but those are not guaranteed to have been calculated at all, since that shader invocation did not actively execute that code path. This is why the spec treats the derivatives as undefined.

Now, depending on where in the 2x2 pixel quad your edge lies, you either get correct results or you don't. In the cases where you don't, one possible outcome is that the GL uses the minification filter, which is GL_NEAREST in your example.

It would probably help to just set both filters to GL_LINEAR. However, that would still not be correct code, as the results remain undefined as per the spec.
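For reference, setting both filters to GL_LINEAR is a two-line host-side configuration fragment (assuming the texture is currently bound to GL_TEXTURE_2D):

```c
/* make minification and magnification behave the same,
   so the mag/min decision no longer changes the result */
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
```

This only hides the symptom here; the derivatives are still undefined, which is why the restructuring below is the proper fix.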

The only correct solution is to move the texture sampling out of the non-uniform control flow, like:

vec4 c1=texture(tex, tc); // sample directly at tc
vec4 c2=texture(tex, some_function_of(tc)); // sample somewhere else
vec4 c3=texture(tex, ...);

// select output color in some non-uniform way
if (foo) {
   c=c1;
} else if (bar) {
   c=c2;
} else {
   c=c3;
}
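Applied to the question's shader, a hoisted version might look like the following sketch. The names `uv`, `tex0`, and `fragColor`, and the exact row colors, are assumptions reconstructed from the question's description of the five rows:

```glsl
#version 330 core
in vec2 uv;               // assumed varying from the vertex shader
uniform sampler2D tex0;
out vec4 fragColor;

void main() {
    // all texture fetches happen in uniform control flow,
    // so the implicit derivatives are well defined
    vec2 tcLeft   = vec2(floor(5.0 * uv.x) * 0.2, uv.y);
    vec2 tcCenter = vec2((floor(5.0 * uv.x) + 0.5) * 0.2, uv.y);
    vec4 cPlain   = texture(tex0, uv);
    vec4 cLeft    = texture(tex0, tcLeft);
    vec4 cCenter  = texture(tex0, tcCenter);

    // only the selection between rows is non-uniform
    int row = int(floor((1.0 - uv.y) * 5.0));
    vec4 c;
    if (row == 0)      c = cPlain;                            // row 1: continuous sampling
    else if (row == 1) c = (uv.x >= 0.0 && uv.x <= 1.0)
                           ? vec4(0, 1, 0, 1) : vec4(1, 0, 0, 1); // row 2: range test
    else if (row == 2) c = vec4(vec3(uv.x), 1.0);             // row 3: u in gray scale
    else if (row == 3) c = cLeft;                             // row 4: left of the texel
    else               c = cCenter;                           // row 5: center of the texel
    fragColor = c;
}
```

The cost is a few extra fetches per fragment, but every `texture()` call now executes in every invocation of the quad, so the finite differencing always sees valid coordinates.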

Upvotes: 4
