markmnl

Reputation: 11426

Why does OpenGL use 4 floats to define colour normally?

I am learning OpenGL and am surprised that colour always seems to be defined as a vec4 struct comprising four single-precision floating-point numbers. I am surprised because, if the amount of memory to be copied to graphics memory is a concern (that is why indices exist for vertex data), the size of each colour is:

4 x 32 = 128 bits or 16 bytes

when all that is needed to define a 32-bit colour is 32 bits (4 bytes)! Why not define the ARGB colour channels as bytes instead?

Upvotes: 4

Views: 2683

Answers (1)

geometrian

Reputation: 15387

colour always seems to be defined as a vec4 struct comprising four single-precision floating-point numbers

First: it's not. When you load a texture, try using one of the sized formats as the internal format. For example, GL_RGBA8 is a normalized format with 8 bits per channel, and GL_RGBA16F is a format with a 16-bit float per channel. There are lots.
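A rough sketch of that choice, assuming an OpenGL loader (e.g. glad or GLEW) is already set up and that `pixels` holds 8-bit RGBA data from your image loader; only the internal-format argument changes:

    /* Sketch only: assumes an OpenGL loader header (glad, GLEW, ...) is
       included and that `pixels` points at width*height RGBA bytes. */
    GLuint make_texture(GLsizei width, GLsizei height, const void *pixels)
    {
        GLuint tex;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D, tex);

        /* 4 bytes per texel on the GPU (normalized 8-bit channels): */
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8,
                     width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixels);

        /* ...or 8 bytes per texel, if you need the extra range/precision:
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16F,
                     width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
        */

        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        return tex;
    }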

The variety is provided for memory reasons, as you say, but also for algorithmic choices. For example, you can use texture views to read a single-channel 32-bit integer texture as a four-channel texture with one byte per channel. As another example, for single-pass depth peeling using 64-bit atomics, you can view a floating-point depth buffer as an atomic integer buffer.
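A minimal sketch of the texture-view example, assuming OpenGL 4.3+ and an existing texture allocated with immutable storage as GL_R32UI (the helper name here is made up for illustration):

    /* Assumes srcTex was allocated with immutable storage, e.g.
       glTexStorage2D(GL_TEXTURE_2D, 1, GL_R32UI, width, height); */
    GLuint make_rgba8ui_view(GLuint srcTex)
    {
        GLuint viewTex;
        glGenTextures(1, &viewTex);
        glTextureView(viewTex,        /* the new view object                 */
                      GL_TEXTURE_2D,  /* target                              */
                      srcTex,         /* texture whose storage is shared     */
                      GL_RGBA8UI,     /* same 32-bit view class as GL_R32UI  */
                      0, 1,           /* mip level 0, one level              */
                      0, 1);          /* layer 0, one layer                  */
        return viewTex;
    }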


As for vertex data, you again have your choice: the API lets you specify different component types, sizes, and strides for your attribute data. Internally, though, the format will almost always be converted to floating point just before the shader runs.
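For instance, a sketch of passing colour as four unsigned bytes per vertex (the `Vertex` struct and attribute location 1 are assumptions; match them to your own layout). Because of the normalization flag, the bytes still arrive in the shader as a floating-point vec4 in the 0.0 to 1.0 range:

    #include <stddef.h>  /* offsetof */

    typedef struct {
        float   position[3];  /* 12 bytes */
        GLubyte colour[4];    /*  4 bytes instead of 16 for a vec4 */
    } Vertex;

    static void set_colour_attrib(void)
    {
        /* Attribute location 1 is an assumption; match it to your shader. */
        glVertexAttribPointer(1,
                              4,                 /* R, G, B, A              */
                              GL_UNSIGNED_BYTE,  /* one byte per component  */
                              GL_TRUE,           /* normalize to [0.0, 1.0] */
                              sizeof(Vertex),
                              (const void *)offsetof(Vertex, colour));
        glEnableVertexAttribArray(1);
        /* The GLSL side still declares: layout(location = 1) in vec4 vColour; */
    }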

Why? Because all of the data is processed by floating-point hardware anyway. Everything you do (blending, texture modulation, BRDF modulation, etc.) is done in floating point. This is fine; rendering is usually bound by fragment-shader texture accesses, so the floating-point compute is comparatively free.

Upvotes: 7
