lilinjn

Reputation: 355

Meaning and Implications of InternalFormat, Format, and Type parameter for WebGL Textures

In WebGL, calls to texSubImage2D and readPixels require Format and Type parameters. In addition, texSubImage2D requires an InternalFormat parameter. While it is easy to find documentation about which combinations of these parameters are valid, it is unclear exactly what these parameters mean and how to use them efficiently, particularly given that some internal formats can be paired with multiple types, e.g. R16F/HALF_FLOAT vs R16F/FLOAT, or GL_R11F_G11F_B10F/FLOAT vs GL_R11F_G11F_B10F/GL_UNSIGNED_INT_10F_11F_11F_REV (where the notation I am using is InternalFormat/Type).

Also, both of these API calls can be used with a pixels parameter that can be a TypedArray -- in this case it is unclear which choices of TypedArray are valid for a given InternalFormat/Format/Type combination (and which choice is optimal in terms of avoiding casting).

For instance, is it true that the internal memory used by the GPU per texel is determined solely by the InternalFormat -- either in an implementation-dependent way (e.g. WebGL1 unsized formats) or, for some newly added InternalFormats in WebGL2, in a fully specified way?

Are the Format and Type parameters related primarily to how data is marshalled into and out of ArrayBuffers? For instance, if I use GL_R11F_G11F_B10F/GL_UNSIGNED_INT_10F_11F_11F_REV, does this mean I should be passing texSubImage2D a Uint32Array with each element of the array having its bits carefully twiddled in JavaScript, whereas if I use GL_R11F_G11F_B10F/FLOAT then I should use a Float32Array with three times the number of elements as in the prior case, and WebGL will handle the bit twiddling for me? Does WebGL try to check that the TypedArray I have passed is consistent with the Format/Type I have chosen, or does it operate directly on the underlying ArrayBuffer? Could I have used a Float64Array in the last instance? And what to do about HALF_FLOAT?
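For concreteness, the "bit twiddling" in the first case would look something like the sketch below. This is a hypothetical helper I wrote to illustrate the UNSIGNED_INT_10F_11F_11F_REV layout (B10F in bits 22-31, G11F in bits 11-21, R11F in bits 0-10); it is simplified and assumes finite, non-negative, in-range normal values, ignoring denormals, Inf/NaN, and rounding:

```javascript
// Hypothetical helper: pack one RGB texel into a Uint32Array element
// laid out as UNSIGNED_INT_10F_11F_11F_REV. Simplified sketch only.
function packR11G11B10F(r, g, b) {
  const f32 = new Float32Array(1);
  const u32 = new Uint32Array(f32.buffer);

  // 11-bit float: 5-bit exponent (bias 15), 6-bit mantissa, no sign.
  function to11F(v) {
    f32[0] = v;
    const bits = u32[0];
    if (v === 0) return 0;
    const e = ((bits >>> 23) & 0xff) - 127 + 15; // rebias the exponent
    const m = (bits >>> 17) & 0x3f;              // top 6 mantissa bits
    return (e << 6) | m;
  }

  // 10-bit float: 5-bit exponent (bias 15), 5-bit mantissa, no sign.
  function to10F(v) {
    f32[0] = v;
    const bits = u32[0];
    if (v === 0) return 0;
    const e = ((bits >>> 23) & 0xff) - 127 + 15;
    const m = (bits >>> 18) & 0x1f;              // top 5 mantissa bits
    return (e << 5) | m;
  }

  return (to11F(r) | (to11F(g) << 11) | (to10F(b) << 22)) >>> 0;
}
```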

Upvotes: 2

Views: 696

Answers (1)

lilinjn

Reputation: 355

It looks like the bulk of the question can be answered by referring to section 3.7.6 Texture Objects of the WebGL2 spec, in particular the table in the documentation for texImage2D, which clarifies which TypedArray is required for each Type:

TypedArray    WebGL Type
----------    ----------
Int8Array     BYTE
Uint8Array    UNSIGNED_BYTE
Int16Array    SHORT
Uint16Array   UNSIGNED_SHORT
Uint16Array   UNSIGNED_SHORT_5_6_5
Uint16Array   UNSIGNED_SHORT_5_5_5_1
Uint16Array   UNSIGNED_SHORT_4_4_4_4
Int32Array    INT
Uint32Array   UNSIGNED_INT
Uint32Array   UNSIGNED_INT_5_9_9_9_REV
Uint32Array   UNSIGNED_INT_2_10_10_10_REV
Uint32Array   UNSIGNED_INT_10F_11F_11F_REV
Uint32Array   UNSIGNED_INT_24_8
Uint16Array   HALF_FLOAT
Float32Array  FLOAT
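The table can be turned into a lookup directly. The sketch below is my own helper (not a WebGL API); the numeric values are the standard WebGL2 enum constants, and I only cover a representative subset of the table:

```javascript
// Standard WebGL2 enum values for a subset of the Types above.
const GL = {
  BYTE: 0x1400, UNSIGNED_BYTE: 0x1401,
  SHORT: 0x1402, UNSIGNED_SHORT: 0x1403,
  INT: 0x1404, UNSIGNED_INT: 0x1405,
  FLOAT: 0x1406, HALF_FLOAT: 0x140B,
  UNSIGNED_INT_10F_11F_11F_REV: 0x8C3B,
};

// Hypothetical helper: map a WebGL Type enum to the TypedArray
// constructor the table says texImage2D/texSubImage2D expect.
function typedArrayFor(type) {
  switch (type) {
    case GL.BYTE: return Int8Array;
    case GL.UNSIGNED_BYTE: return Uint8Array;
    case GL.SHORT: return Int16Array;
    case GL.UNSIGNED_SHORT:
    case GL.HALF_FLOAT: return Uint16Array; // raw 16-bit float bit patterns
    case GL.INT: return Int32Array;
    case GL.UNSIGNED_INT:
    case GL.UNSIGNED_INT_10F_11F_11F_REV: return Uint32Array;
    case GL.FLOAT: return Float32Array;
    default: throw new Error("type not covered in this sketch");
  }
}
```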

My guess is that

  • InternalFormat determines how much GPU memory is used to store the texture
  • Format and Type govern how data is marshalled into/out of the texture from JavaScript
  • Type determines what type of TypedArray must be used
  • Format plus the pixelStorei parameters (section 6.10) determine how many elements the TypedArray will need and which elements will actually be used (whether things are tightly packed, whether some rows are padded, etc.)
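As an illustration of the last bullet, here is a sketch of the row-stride arithmetic implied by the UNPACK_ALIGNMENT pixelStorei parameter: each row is padded up to a multiple of the alignment (1, 2, 4, or 8). The function name and the assumption that pixels are tightly packed within a row are mine:

```javascript
// Hypothetical helper: bytes an upload buffer needs, given that
// UNPACK_ALIGNMENT pads the start of each row (but the final row
// need not be padded past its last pixel).
function uploadSizeInBytes(width, height, bytesPerPixel, alignment) {
  const rowBytes = width * bytesPerPixel;
  // round each row's stride up to the alignment boundary
  const stride = Math.ceil(rowBytes / alignment) * alignment;
  return stride * (height - 1) + rowBytes;
}
```

For example, a 3x4 RGB/UNSIGNED_BYTE upload (3 bytes per pixel) with the default alignment of 4 has a 9-byte row padded to a 12-byte stride.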

Todo: Work out details for

  • encoding/decoding some of the more obscure Type values to and from JavaScript.
  • calculating typed array size requirement and stride info given Type, Format, and pixelStorei parameters
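For the first todo, the HALF_FLOAT case would look roughly like this. This is my own simplified sketch of encoding a JS number into the 16-bit pattern a Uint16Array passed with type HALF_FLOAT should hold; it handles zero, normals, and overflow to infinity, but not denormals, NaN, or careful rounding:

```javascript
// Hypothetical helper: JS number -> IEEE half-float bit pattern
// (1 sign bit, 5-bit exponent with bias 15, 10-bit mantissa).
function toHalf(v) {
  const f32 = new Float32Array(1);
  const u32 = new Uint32Array(f32.buffer);
  f32[0] = v;
  const bits = u32[0];
  const sign = (bits >>> 16) & 0x8000;
  const e = ((bits >>> 23) & 0xff) - 127 + 15; // rebias the exponent
  const m = (bits >>> 13) & 0x3ff;             // top 10 mantissa bits
  if (v === 0) return sign;                    // +/-0
  if (e <= 0) return sign;                     // flush tiny values to 0
  if (e >= 31) return sign | 0x7c00;           // overflow -> infinity
  return sign | (e << 10) | m;
}
```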

Upvotes: 1
