Phildo

Reputation: 1066

glTexImage2D error subtleties between iOS and Android - inconsistent documentation

So I have this line of code:

glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, shadow_tex_dim.x, shadow_tex_dim.y, 0, GL_DEPTH_COMPONENT, GL_FLOAT, shadow_texture_data);

which works fine to create a depth texture on Android (running OpenGL ES 2) and on OS X.

When I run it on iOS (iOS 10, also running OpenGL ES 2), glGetError() returns GL_INVALID_OPERATION. (glGetError() right before this line returns no error.)
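Concretely, the check looks like this (a minimal sketch using the names from the line above):

    GLenum before = glGetError();  // GL_NO_ERROR at this point
    glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT,
                 shadow_tex_dim.x, shadow_tex_dim.y, 0,
                 GL_DEPTH_COMPONENT, GL_FLOAT, shadow_texture_data);
    GLenum after = glGetError();   // GL_NO_ERROR on Android/OSX, GL_INVALID_OPERATION on iOS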

Here are the docs for glTexImage2D: http://docs.gl/es2/glTexImage2D

Notice that 'internalformat' specifies that the only valid arguments are GL_ALPHA, GL_LUMINANCE, GL_LUMINANCE_ALPHA, GL_RGB, and GL_RGBA, yet down in the "Examples" section it shows glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, fbo_width, fbo_height, 0, GL_DEPTH_COMPONENT, GL_UNSIGNED_BYTE, NULL); (which is very similar to my current line, but with GL_UNSIGNED_BYTE rather than GL_FLOAT).

So, am I allowed to use GL_DEPTH_COMPONENT? Why does this work on Android's OpenGL ES 2 and not on iOS's? And where did I get the idea that I should be using GL_FLOAT (note that the behavior doesn't seem to change either way on iOS or Android...)?

Upvotes: 0

Views: 388

Answers (1)

Mobile Ben

Reputation: 7341

Apple's support for depth textures would be defined here: https://www.khronos.org/registry/gles/extensions/OES/OES_depth_texture.txt

From that specification, two passages are relevant:

Textures with <format> and <internalformat> values of DEPTH_COMPONENT refer to a texture that contains depth component data. <type> is used to determine the number of bits used to specify depth texel values.

A <type> value of UNSIGNED_SHORT refers to a 16-bit depth value. A <type> value of UNSIGNED_INT refers to a 32-bit depth value.

and

The error INVALID_OPERATION is generated if the <format> and <internalformat> is DEPTH_COMPONENT and <type> is not UNSIGNED_SHORT, or UNSIGNED_INT.
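Read together, those passages say that when format/internalformat is GL_DEPTH_COMPONENT, the type must be GL_UNSIGNED_SHORT or GL_UNSIGNED_INT. A minimal sketch of your call rewritten to satisfy that (shadow_tex_dim is the name from your question; passing NULL is the usual pattern when the texture will be filled by rendering into an FBO):

    glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT,
                 shadow_tex_dim.x, shadow_tex_dim.y, 0,
                 GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, NULL);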

This is also interesting: https://www.opengl.org/wiki/Common_Mistakes

In OpenGL, all depth values lie in the range [0, 1]. The integer normalization process simply converts this floating-point range into integer values of the appropriate precision. It is the integer value that is stored in the depth buffer.

Typically, 24-bit depth buffers will pad each depth value out to 32-bits, so 8-bits per pixel will go unused. However, if you ask for an 8-bit Stencil Buffer along with the depth buffer, the two separate images will generally be combined into a single depth/stencil image. 24-bits will be used for depth, and the remaining 8-bits for stencil.

Now that the misconception about depth buffers being floating point is resolved, what is wrong with this call?

glReadPixels(0, 0, width, height, GL_DEPTH_COMPONENT, GL_FLOAT, mypixels);

Because the depth format is a normalized integer format, the driver will have to use the CPU to convert the normalized integer data into floating-point values. This is slow.
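As a rough illustration of the normalization described above (my own arithmetic, not a quote from that page): a depth value d in [0, 1] stored in a 24-bit depth buffer becomes the integer round(d * (2^24 - 1)), so 0.5 maps to 8388608.

    /* Sketch of the normalization described in the quote above. */
    unsigned int to_depth24(float d) {
        return (unsigned int)(d * 16777215.0f + 0.5f);  /* 16777215 == 2^24 - 1 */
    }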

It would appear that the Android driver you are testing on accepts GL_FLOAT as the type for a depth texture, but that is not something OES_depth_texture requires; iOS follows the extension strictly and only accepts GL_UNSIGNED_SHORT or GL_UNSIGNED_INT.
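If you want one code path for both platforms, a rough sketch (my own suggestion, not from the docs; the helper name is made up and the strstr check is a simplification) is to check for the extension and stick to the type it guarantees:

    #include <string.h>

    /* Sketch: use GL_UNSIGNED_INT, which GL_OES_depth_texture guarantees, instead of
       relying on a driver that happens to accept GL_FLOAT. */
    static void create_shadow_depth_texture(int w, int h) {
        const char* exts = (const char*)glGetString(GL_EXTENSIONS);
        if (exts && strstr(exts, "GL_OES_depth_texture")) {
            glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, w, h, 0,
                         GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, NULL);
        } else {
            /* No depth-texture support; attach a depth renderbuffer to the FBO instead. */
        }
    }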

Upvotes: 1
