svenevs

Reputation: 863

Displaying 16-bit unsigned integers in OpenGL

I would like a simple way of achieving this, but I seem to be botching the parameters to glTexImage2D. I have an std::vector<uint16_t> depth_buffer that is filled frame by frame with depth measurements coming from a Kinect. There are exactly 640 x 480 of them, one depth measurement per pixel. If the world went my way, the call would be

glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE16, 640, 480, 0, GL_LUMINANCE16, GL_UNSIGNED_SHORT, depth_buffer.data());

where internalFormat (the third parameter) is GL_LUMINANCE16 because the values are 16-bit unsigned integers, format is the same because that is exactly how the data comes in, and type is GL_UNSIGNED_SHORT because these are shorts, not bytes.

Surprisingly, if I change it to be

glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE16, 640, 480, 0, GL_LUMINANCE, GL_UNSIGNED_BYTE, depth_buffer.data());

where internalFormat is still GL_LUMINANCE16, format is just GL_LUMINANCE without the 16, and type is GL_UNSIGNED_BYTE, then I get something. Things are clearly being skipped, but just changing to GL_UNSIGNED_SHORT doesn't cut it.

Depending on which documentation you read, format (the second GL_LUMINANCE) may or may not allow the 16 after it (anybody know why? experimentation seems to confirm this). But my chief concern here is why GL_UNSIGNED_SHORT seems to produce invalid results (either all black or all white) depending on the internalFormat / format combination.

I've tried an obscene number of combinations here, and am looking for the right approach. Anybody have some advice for achieving this? I'm not opposed to using FBOs, but would really like to avoid them if possible, since this definitely should be doable.

Upvotes: 2

Views: 1841

Answers (2)

Dietrich Epp

Reputation: 213338

I wouldn't bother with GL_LUMINANCE; it's an obsolete feature from old versions of OpenGL (no, seriously, don't use it). In a modern setting, you would use the following (see the sketch after the list):

  • Internal format GL_R16. All this means is "one channel, 16 bits, normalized".

  • Format GL_RED. (Formats are not sized, so GL_LUMINANCE16 is illegal here, and GL_R16 is also illegal.)

  • Type GL_UNSIGNED_SHORT.
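
Putting those three parameters together, a minimal sketch of the upload might look like this. It assumes a GL 3+ context with a texture already bound to GL_TEXTURE_2D; the swizzle at the end is optional, is only an assumption about wanting the old GL_LUMINANCE "same value on all channels" behaviour, and needs GL 3.3 or ARB_texture_swizzle.

glTexImage2D(GL_TEXTURE_2D,
             0,                   // mip level
             GL_R16,              // sized internal format: one 16-bit normalized channel
             640, 480,
             0,                   // border, must be 0
             GL_RED,              // client-side format: one channel
             GL_UNSIGNED_SHORT,   // client-side type: 16-bit unsigned
             depth_buffer.data());

// Optionally replicate red into green and blue so the texture samples as grey,
// roughly what GL_LUMINANCE used to do (requires GL 3.3 / ARB_texture_swizzle).
GLint swizzle[4] = { GL_RED, GL_RED, GL_RED, GL_ONE };
glTexParameteriv(GL_TEXTURE_2D, GL_TEXTURE_SWIZZLE_RGBA, swizzle);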

Upvotes: 4

datenwolf

Reputation: 162164

The (second) format parameter only tells OpenGL what the data contains, not how it is laid out. Therefore GL_LUMINANCE16 is an invalid token to pass to the format parameter (it's allowed only for the internalformat parameter).

The layout from which the data shall be unpacked is controlled by the type parameter to glTexImage and the pixel store settings for unpacking set with glPixelStorei for the GL_UNPACK_… parameters. Most likely your "skipping" is due to mismatched pixel store unpack parameters.
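
For example, here is a minimal sketch of the unpack state worth resetting before the upload; the specific values are assumptions for the tightly packed 640 x 480 uint16_t buffer from the question.

glPixelStorei(GL_UNPACK_ALIGNMENT, 2);          // rows start on 2-byte boundaries (default is 4)
glPixelStorei(GL_UNPACK_ROW_LENGTH, 0);         // 0 means rows are exactly `width` pixels long
glPixelStorei(GL_UNPACK_SKIP_PIXELS, 0);        // no leading pixels to skip in each row
glPixelStorei(GL_UNPACK_SKIP_ROWS, 0);          // no leading rows to skip
glPixelStorei(GL_UNPACK_SWAP_BYTES, GL_FALSE);  // GL_TRUE only if the source data is byte-swapped

glTexImage2D(GL_TEXTURE_2D, 0, GL_R16, 640, 480, 0,
             GL_RED, GL_UNSIGNED_SHORT, depth_buffer.data());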

Upvotes: 2
