Reputation: 59
The OpenGL specification says that all of its types have fixed bit depths. As far as I know, the C++ specification doesn't fix the bit depth of the fundamental types. The only thing we know is that C++ types like int32_t, int16_t, ... contain the required number of bytes, but not bits. How can we safely use these types? And how can we be sure that, say, the OpenGL type "unsigned integer" will match uint32_t at the binary-representation level?
Upvotes: 1
Views: 1487
Reputation: 473627
The only thing we know is that C++ types like int32_t, int16_t, ... contain the required number of bytes, but not bits.
That's not true at all. The C standard, which the C++ standard imports, states:
The typedef name intN_t designates a signed integer type with width N, no padding bits, ...
N being the number of bits, not the number of bytes.
The OpenGL standard similarly defines its types with an exact number of bits, not bytes. Neither one allows padding.
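If you want to see this guarantee expressed in code, here is a minimal sketch of compile-time checks that must pass on any conforming implementation (the assertion messages are my own wording, not from either standard):

```cpp
#include <climits>
#include <cstdint>
#include <limits>

// int32_t/uint32_t are required to have exactly 32 bits and no padding bits,
// so both of these assertions hold wherever these typedefs exist.
static_assert(sizeof(std::int32_t) * CHAR_BIT == 32,
              "int32_t is exactly 32 bits wide");
static_assert(std::numeric_limits<std::uint32_t>::digits == 32,
              "uint32_t has exactly 32 value bits");
```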
Therefore, GLuint must be identically sized and formatted relative to uint32_t. They need not be the exact same type, but since they store the same range of values and have the same size, conversion between them ought to be lossless.
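If it would make you more comfortable, you can have the compiler verify this on your platform. A minimal sketch, assuming GLuint comes from <GL/gl.h> (the header name and how you include it may differ on your system):

```cpp
#include <cstdint>
#include <type_traits>
#include <GL/gl.h>  // provides GLuint; platform-specific include path assumed

// GLuint is specified as a 32-bit unsigned integer, so it must have the same
// size and representation as std::uint32_t, even if it is a distinct type.
static_assert(sizeof(GLuint) == sizeof(std::uint32_t),
              "GLuint and uint32_t have the same size");
static_assert(std::is_unsigned<GLuint>::value,
              "GLuint is an unsigned type");
```

If those assertions compile, passing a buffer of uint32_t values where the API expects GLuint (for example, index data) cannot lose or reinterpret any bits.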
So it's not clear what you're concerned about.
Upvotes: 2