Bram

Reputation: 8283

Why is OpenGL missing a 16-bit RGB format that is unsigned?

I have been using a 16-bit monochrome texture using the GL_R16 format. This works just fine. I am now in need of a 3-channel 16-bit texture.

When examining the man page for glTexStorage2D, though, I see that there seems to be a gap in functionality.

One channel is available: GL_R16

Two channels are available: GL_RG16

Four channels are available: GL_RGBA16

But three channels only come in a SNORM (signed normalized) flavour: GL_RGB16_SNORM.

What happened to GL_RGB16? Adding a 4th channel seems wasteful, so I would like to avoid that. I also want to avoid dealing with -1..1 samples, as my data is unsigned.

Upvotes: 1

Views: 1210

Answers (1)

derhass

Reputation: 45342

What happened to GL_RGB16?

It's fine. GL_RGB16 is an allowed internal format, and for textures it is even required to be supported (in contrast to using such a texture as a framebuffer attachment for render-to-texture, where support is not required), as per table 8.12 in the OpenGL 4.6 core profile spec.
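As a minimal sketch (assuming a GL 4.2+ context with function pointers already loaded, and that width, height and pixels are defined elsewhere, with pixels pointing to width*height*3 unsigned 16-bit values), allocating and filling such a texture could look like this:

    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);

    /* Immutable storage with the unsigned-normalized 16-bit RGB format. */
    glTexStorage2D(GL_TEXTURE_2D, 1, GL_RGB16, width, height);

    /* Rows of 6-byte RGB16 texels may not be 4-byte aligned for odd widths,
       so relax the default unpack alignment before uploading. */
    glPixelStorei(GL_UNPACK_ALIGNMENT, 2);
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height,
                    GL_RGB, GL_UNSIGNED_SHORT, pixels);

    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);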

Those OpenGL "reference pages" are unfortunately notoriously incomplete, outdated, often misleading, and sometimes even flat-out wrong. The only reliable source of documentation is the OpenGL specification.

Adding a 4th channel seems wasteful, so I would like to avoid that

What hardware does with such a format is another story entirely. For example, any real-world GPU will internally pad GL_RGB8 to RGBA, because it won't like 3-byte texels. I wouldn't be surprised if they didn't like the 6-byte texels of GL_RGB16 either.
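If you want a hint about what the driver actually does internally, the internal format query introduced with GL 4.3 / ARB_internalformat_query2 can report the implementation's preferred format. A sketch, assuming a context where that feature is available (the returned value is only a hint, and implementations vary):

    GLint preferred = 0;
    glGetInternalformativ(GL_TEXTURE_2D, GL_RGB16,
                          GL_INTERNALFORMAT_PREFERRED, 1, &preferred);
    /* If the driver pads to four channels, 'preferred' may come back as
       GL_RGBA16 rather than GL_RGB16. */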

Upvotes: 5
