Reputation: 2625
In OpenGL, vertices are specified in the -1.0 to 1.0 range in NDC and are then mapped to the actual screen. But isn't it possible that, with a very large screen resolution, it becomes impossible to address an exact pixel location with this limited floating-point value range?
So, mathematically, how large would the screen resolution have to be for that to happen?
Upvotes: 2
Views: 1850
Reputation: 43329
Window-space (raster) coordinates are fixed-point, with a few bits reserved for sub-pixel accuracy (you absolutely need this, since pixel coverage for things like triangles is based on distance from the pixel center).
The amount of sub-pixel precision you are afforded really depends on the value of GL_MAX_VIEWPORT_DIMS. But if GL_MAX_VIEWPORT_DIMS did not exist, then it would certainly make sense to use floating-point pixel coordinates, since you would want to support a massive (potentially unknown) range of coordinates.
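If you want to see what your own implementation reports, both limits can be queried with plain glGetIntegerv. A minimal sketch, assuming a current OpenGL context already exists (the correct GL header include varies by platform; error handling omitted):

```c
#include <GL/gl.h>   /* platform-dependent; on Windows include <windows.h> first */
#include <stdio.h>

void print_raster_limits(void)
{
    GLint dims[2];   /* maximum viewport width and height */
    GLint subpixel;  /* bits of sub-pixel precision */

    glGetIntegerv(GL_MAX_VIEWPORT_DIMS, dims);
    glGetIntegerv(GL_SUBPIXEL_BITS, &subpixel);

    printf("Max viewport:   %d x %d\n", dims[0], dims[1]);
    printf("Sub-pixel bits: %d (each pixel split into %d positions)\n",
           subpixel, 1 << subpixel);
}
```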
In a minimum OpenGL implementation there must be 4 bits of sub-pixel precision (GL_SUBPIXEL_BITS), so if your GPU used 16 bits for raster coordinates, that would give you 12 integer bits + 4 fractional bits to spread across GL_MAX_VIEWPORT_DIMS (whose value would probably be 4096 for 12.4 fixed-point). Such an implementation would limit the integer coordinates to the range [0, 4095] and would divide each of those integer coordinates into 16 sub-pixel positions.
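To make the 12.4 arithmetic concrete, here is a small hypothetical sketch of the snapping step such an implementation would perform. The helper name and the multiply-round approach are illustrative, not the actual hardware path:

```c
#include <math.h>
#include <stdio.h>

/* Snap a window-space coordinate to 12.4 fixed point:
 * 12 integer bits (range [0, 4095]) + 4 fractional bits,
 * i.e. 16 sub-pixel positions per pixel. Illustrative only. */
static int to_fixed_12_4(float window_coord)
{
    return (int)lrintf(window_coord * 16.0f); /* 2^4 = 16 */
}

int main(void)
{
    float x = 123.4567f;
    int fx = to_fixed_12_4(x);
    /* Prints: 123.456703 -> 1975 (= 123 + 7/16 = 123.437500) */
    printf("%f -> %d (= %d + %d/16 = %f)\n",
           x, fx, fx >> 4, fx & 0xF, fx / 16.0f);
    return 0;
}
```

Note how the snapped result (123.4375) is the nearest of the 16 sub-pixel positions to the original coordinate; that quantization error is the precision cost of fixed-point rasterization.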
Upvotes: 2
Reputation: 54602
A standard (IEEE 754) 32-bit float has 24 bits of precision in the mantissa. 23 bits are stored, plus an implicit leading 1. Since we're looking at a range of -1.0 to 1.0 here, we can also include the sign bit when estimating the precision. So that gives 25 bits of precision.
25 bits of precision is enough to cover 2^25 values. 2^25 = 33,554,432. So with float precision, we could handle a resolution of about 33,554,432 x 33,554,432 pixels. I think we're safe for a while!
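You can sanity-check that figure with standard C: the spacing between adjacent floats just below 1.0 is one ULP, 2^-24, so the [-1, 1] range resolves roughly 2 / 2^-24 = 2^25 positions. A quick sketch using nextafterf from math.h:

```c
#include <math.h>
#include <stdio.h>

int main(void)
{
    /* Worst-case spacing between representable floats inside [-1, 1]
     * occurs just below 1.0, where one ULP is 2^-24. */
    float just_below_one = nextafterf(1.0f, 0.0f);
    float ulp = 1.0f - just_below_one;

    /* The NDC range is 2.0 wide, so it resolves about 2 / ulp positions. */
    printf("ULP below 1.0:      %g\n", ulp);        /* 5.96046e-08 */
    printf("Distinct positions: %.0f\n", 2.0f / ulp); /* 33554432 = 2^25 */
    return 0;
}
```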
Upvotes: 3