Thomas

Reputation: 6196

Why are these signed bytes being read as unsigned bytes with LWJGL?

When you upload a ByteBuffer (a java.nio object) that stores signed bytes with the LWJGL function glBufferData(), it turns out that the correct way for OpenGL to interpret the data in the corresponding buffer object is as GL_UNSIGNED_BYTE.

Why is this? LWJGL does not seem to convert the ByteBuffer to some other format. Here is the source for the glBufferData() function:

public static void glBufferData(int target, ByteBuffer data, int usage) {
    ContextCapabilities caps = GLContext.getCapabilities();

    // Look up the native glBufferData entry point for the current context
    long function_pointer = caps.glBufferData;

    BufferChecks.checkFunctionAddress(function_pointer);
    BufferChecks.checkDirect(data); // the buffer must be direct (off-heap)

    // Pass the buffer's raw address and remaining byte count straight to the driver
    nglBufferData(target, data.remaining(), MemoryUtil.getAddress(data), usage, function_pointer);
}

Any idea why?

Edit:

I see why you may think no conversion is needed, since unsigned bytes and signed bytes are stored the same way. But let me clarify: I put the integral values 1, 2, 3, 4, 5, etc. into this ByteBuffer, presumably as signed bytes, because that is what Java handles. So the buffer presumably stores 1, 2, 3, 4, 5 under a signed interpretation. The question is why OpenGL reads 1, 2, 3, 4, 5 correctly with an unsigned interpretation instead of the signed one.

Note that this data is used as an index buffer.
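For reference, here is a minimal sketch of the scenario described above (names and values are illustrative; a current OpenGL context and LWJGL 2's static imports from GL11/GL15 are assumed). The bytes are copied into the buffer object verbatim, and only the type passed to glDrawElements later decides how they are read:

ByteBuffer indices = BufferUtils.createByteBuffer(6);
indices.put(new byte[] { 0, 1, 2, 2, 3, 0 }); // stored as Java (signed) bytes
indices.flip();

int ibo = glGenBuffers();
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);
// glBufferData passes the raw bytes through unchanged; no signed/unsigned conversion here
glBufferData(GL_ELEMENT_ARRAY_BUFFER, indices, GL_STATIC_DRAW);

// The type parameter is what tells GL how to interpret those bytes when drawing:
glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_BYTE, 0);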

Upvotes: 1

Views: 638

Answers (2)

Andon M. Coleman

Reputation: 43329

To begin with, do not use GL_UNSIGNED_BYTE for vertex buffer indices. OpenGL supports this at the API level, but desktop GPU hardware manufactured in the past ~14 years generally does not support it at the hardware level. The driver will convert the indices to 16-bit in order to satisfy hardware constraints, so all you are actually doing is increasing the workload on your driver. GL_UNSIGNED_SHORT is really the smallest index size you should use if you do not want to burden your driver unnecessarily. What it boils down to is unaligned memory access: you can use 8-bit indices if you want, but you will get better vertex performance if you use 16/32-bit instead.
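As a sketch of that advice, switching to 16-bit indices only means storing the data in a ShortBuffer and passing GL_UNSIGNED_SHORT when drawing (again assuming LWJGL 2 and an existing buffer object ibo, which are illustrative names here):

ShortBuffer indices = BufferUtils.createShortBuffer(6);
indices.put(new short[] { 0, 1, 2, 2, 3, 0 });
indices.flip();

glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, indices, GL_STATIC_DRAW); // GL15 also has a ShortBuffer overload
glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_SHORT, 0);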

To address the actual issue in this question: you are using GL_UNSIGNED_BYTE to interpret the vertex indices, and in this case the range of the data type is irrelevant for values < 128. GL_UNSIGNED_BYTE vs. GL_BYTE (signed) really only matters for interpreting color values, because GL does fixed-point scaling in order to re-map the values from [-128, 127] -> [-1.0, 1.0] (signed) or [0, 255] -> [0.0, 1.0] (unsigned) for internal representation.

In the case of a vertex index, however, the number 5 is still 5 after it is converted from unsigned to signed or the other way around. There is no fixed-point to floating-point conversion necessary to interpret vertex indices, and so the range of values is not particularly important (assuming no overflow).
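To make the color case concrete, here is a sketch of where the signed/unsigned choice does change the result: a normalized vertex attribute (colorAttrib is a hypothetical attribute location; LWJGL 2's GL11/GL20 constants are assumed). The bit pattern 0xFF is read as 255 and normalized to 1.0 with GL_UNSIGNED_BYTE, but read as -1 and normalized to a value just below zero with GL_BYTE:

// Unsigned, normalized: [0, 255] -> [0.0, 1.0]
glVertexAttribPointer(colorAttrib, 4, GL_UNSIGNED_BYTE, true, 0, 0);

// Signed, normalized: [-128, 127] -> [-1.0, 1.0] (same bits, very different colors)
// glVertexAttribPointer(colorAttrib, 4, GL_BYTE, true, 0, 0);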

To that end, you have no choice in the matter when using vertex indices. The only valid enums are GL_UNSIGNED_BYTE, GL_UNSIGNED_SHORT and GL_UNSIGNED_INT. If your language cannot represent unsigned values, then the language binding for OpenGL will be responsible for figuring out exactly what these enums mean and how to handle them.

Upvotes: 2

Joni

Reputation: 111279

The main difference between signed and unsigned bytes is how you interpret the bits: negative values have the same bit patterns as values over 127. You don't need different types of storage for the two, and the conversion (which is really a no-op) works automatically using the two's complement system.
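A small, self-contained Java example of that point (illustrative only): the same bit pattern prints differently depending on how you choose to interpret it, and no data changes when you switch interpretations.

public class SignedVsUnsigned {
    public static void main(String[] args) {
        byte b = (byte) 200;          // bit pattern 0xC8
        System.out.println(b);        // -56: signed (two's complement) interpretation
        System.out.println(b & 0xFF); // 200: unsigned interpretation of the same bits
    }
}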

Upvotes: 0
