Translunar

Reputation: 3816

OpenGL: depth calculations are discontinuous

I'm building a LIDAR simulator in OpenGL. This means that the fragment shader writes the length of the light vector (the distance) into one of the color channels, normalized by the distance to the far plane so that it falls between 0 and 1. In other words, I use red to indicate light intensity and blue to indicate distance, and I set green to 0. Alpha is unused, but I keep it at 1.

Here's my test object, which happens to be a rock:

[Image: the rendered test object, a rock]

I then write the pixel data to a file and load it into a point cloud visualizer (one point per pixel), with essentially default settings. When I do that, it becomes clear that all of my points lie in discrete planes, each at a different depth:

[Image: point cloud visualization showing the discrete depth planes]

I tried plotting the same data in R. The banding doesn't show up with the default histogram, because the planes are spaced fairly densely; but when I set the number of breaks to about 60, I get this:

[Image: histogram of the distances, with about 60 breaks]

I've tried shrinking the distance between the near and far planes, in case it was a precision issue. First I was doing 1–1000, and now I'm at 1–500. It may have decreased the distance between planes, but I can't tell, because it means the camera has to be closer to the object.

Is there something I'm missing? Does this have to do with the fact that I disabled anti-aliasing? (Anti-aliasing was causing even worse periodic artifacts, but between the camera and the object instead. I disabled line smoothing, polygon smoothing, and multisampling, and that took care of that particular problem.)

Edit

The distance calculation is performed in two places: ec_pos is computed in the vertex shader, and its length is taken in the fragment shader.

Is it because I'm calculating ec_pos in the vertex shader? How can I calculate ec_pos in the fragment shader instead?
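For concreteness, here is a minimal sketch of that pattern; all names (u_modelview, u_far, and so on) are illustrative, not the actual code:

```glsl
// Vertex shader: compute the eye-space position and hand it to the rasterizer.
#version 330 core
uniform mat4 u_modelview;
uniform mat4 u_projection;
in vec4 a_position;
out vec4 ec_pos;                 // interpolated per-fragment

void main() {
    ec_pos      = u_modelview * a_position;
    gl_Position = u_projection * ec_pos;
}
```

```glsl
// Fragment shader: distance from the eye, normalized by the far plane.
#version 330 core
uniform float u_far;             // distance to the far plane
in vec4 ec_pos;
out vec4 frag_color;

void main() {
    float dist = length(ec_pos.xyz) / u_far;   // in [0, 1]
    // red: intensity (constant placeholder here), green: 0, blue: distance
    frag_color = vec4(1.0, 0.0, dist, 1.0);
}
```

For what it's worth, varyings like ec_pos are interpolated in a perspective-correct way, so computing it per-vertex and reading the interpolated value per-fragment is normally equivalent to computing it in the fragment shader.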

Upvotes: 4

Views: 604

Answers (1)

geometrian

Reputation: 15387

There are several possible issues I can think of.

(1) Your depth precision. The far plane has very little effect on resolution; the near plane is what's important. See Learning to Love your Z-Buffer.
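To make that concrete: under the standard OpenGL projection, an eye-space distance $d$ lands in the depth buffer at

$$z_w = \frac{f\,(d - n)}{d\,(f - n)},$$

where $n$ and $f$ are the near- and far-plane distances. The mapping is hyperbolic in $d$, so most of the buffer's resolution is spent just beyond the near plane; halving $f$ (say, 1000 to 500) changes precision very little, while pushing $n$ out (say, 1 to 2) roughly doubles the resolution at any given distance.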

(2) The more probable explanation, based on what you've provided, is the conversion/saving of the pixel data. The shader outputs floating-point values, but these are stored in the framebuffer, which will typically have only 8 bits per channel. What that means is that your floats are mapped to the underlying 8-bit fixed-point representation, so each channel can hold only 256 distinct values. With your far plane at 500, that's a quantization step of roughly 500/255 ≈ 2 world units, which would produce exactly the kind of discrete planes you're seeing.
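You can reproduce the effect with a few lines of arithmetic (a standalone sketch; only the 500-unit far plane is taken from the question):

```c
#include <stdio.h>

/* Simulate what an 8-bit normalized framebuffer does to a float in [0,1]. */
int main(void) {
    float far = 500.0f;               /* far-plane distance from the question */
    float d1 = 123.0f, d2 = 123.9f;   /* two nearby eye-space distances */
    unsigned char q1 = (unsigned char)(d1 / far * 255.0f + 0.5f);
    unsigned char q2 = (unsigned char)(d2 / far * 255.0f + 0.5f);
    /* Both distances quantize to the same 8-bit value, i.e. the same "plane". */
    printf("%u %u (step = %.3f world units)\n", q1, q2, far / 255.0f);
    return 0;
}
```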

If you want to output pixel data as the true floats they are, you should render to a 32-bit floating-point RGBA FBO (with an internal format such as GL_RGBA32F). This will store actual floats. Then, when you read your data back from the GPU, you will get the original shader values.
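A sketch of that setup, assuming an OpenGL 3.0+ context is current; width, height, and the draw calls are placeholders:

```c
GLuint fbo, color_tex, depth_rbo;

/* Color attachment with real 32-bit float channels (no 8-bit quantization). */
glGenTextures(1, &color_tex);
glBindTexture(GL_TEXTURE_2D, color_tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F, width, height, 0,
             GL_RGBA, GL_FLOAT, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

/* Depth attachment so depth testing still works when rendering offscreen. */
glGenRenderbuffers(1, &depth_rbo);
glBindRenderbuffer(GL_RENDERBUFFER, depth_rbo);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, width, height);

glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, color_tex, 0);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                          GL_RENDERBUFFER, depth_rbo);
if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
    /* handle error */
}

/* ... render the scene into the FBO here ... */

/* Read back the unquantized floats, 4 per pixel (RGBA). */
float *pixels = malloc((size_t)width * height * 4 * sizeof *pixels);
glReadPixels(0, 0, width, height, GL_RGBA, GL_FLOAT, pixels);
```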

I suppose you could alternatively encode a single float across the four channels of a vec4 with some multiplication, if you don't have an FBO implementation handy.
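One common version of that trick, as a GLSL sketch; note it spends all four 8-bit channels on a single value in [0, 1), so it wouldn't coexist with using red for intensity:

```glsl
// Pack a float in [0,1) across four 8-bit channels. Decode on the CPU with:
//   v = dot(rgba, vec4(1.0, 1.0/255.0, 1.0/65025.0, 1.0/16581375.0))
vec4 pack_float(float v) {
    vec4 enc = fract(v * vec4(1.0, 255.0, 65025.0, 16581375.0));
    enc -= enc.yzww * vec4(1.0/255.0, 1.0/255.0, 1.0/255.0, 0.0);
    return enc;
}
```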

Upvotes: 4
