Reputation: 23
According to the OpenGL documentation, when you call glBufferData the size parameter needs to be in bytes.
However, this is a snippet from a program that works perfectly fine:
long vertBsize = ((short)5) * 4l * 4l;
glBindBuffer ( GL.GL_ARRAY_BUFFER, vbuffer);
glBufferData ( GL.GL_ARRAY_BUFFER, vertBsize , points, GL.GL_STATIC_DRAW);
According to the Java documentation, a short is 16 bits, so in my mind this code snippet is incorrectly sending the number of BITS in the points buffer.
Am I stupid/way too tired, or am I missing something?
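For what it's worth, the size expression on its own is plain integer arithmetic; this is just a throwaway check I could run outside the real program (hypothetical standalone snippet, not part of it):

public class SizeCheck {
    public static void main(String[] args) {
        // Same expression as in the snippet above: the (short) cast only
        // affects the type of the literal 5, not the units of the result.
        long vertBsize = ((short) 5) * 4l * 4l;
        System.out.println(vertBsize); // prints 80
    }
}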
Upvotes: 0
Views: 35
Reputation: 23
@Reto Koradi's question asking for more detail ultimately led me to the answer.
The points buffer contains nothing but floats, split into sets of four:
{1.0, 2.0, 4.0, 1.0,
 2.0, 3.0, 8.0, 1.0,
 3.0, 4.0, 2.0, 1.0,
 2.0, 3.0, 8.0, 1.0,
 1.0, 2.0, 4.0, 1.0}
Since a float == 32 bits == 4 bytes, the formula 5 (sets of floats) * 4 (floats per set) * 4 (bytes per float) does indeed come out to the correct number of bytes: 80.
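To make the byte count less magic, the same size can be derived from the data itself. This is only an illustrative sketch (the actual upload call depends on which Java GL binding is used, so it is left as a comment), not the original program:

import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.FloatBuffer;

public class BufferSizeDemo {
    public static void main(String[] args) {
        // Same data as above: 5 sets of 4 floats each.
        float[] pointData = {
            1.0f, 2.0f, 4.0f, 1.0f,
            2.0f, 3.0f, 8.0f, 1.0f,
            3.0f, 4.0f, 2.0f, 1.0f,
            2.0f, 3.0f, 8.0f, 1.0f,
            1.0f, 2.0f, 4.0f, 1.0f
        };

        // Size in bytes derived from the array length instead of hard-coded
        // constants: 20 floats * 4 bytes per float = 80 bytes.
        long sizeInBytes = (long) pointData.length * Float.BYTES;
        System.out.println(sizeInBytes); // prints 80

        // Wrap the data in a direct FloatBuffer, as a GL binding would expect.
        FloatBuffer points = ByteBuffer
            .allocateDirect(pointData.length * Float.BYTES)
            .order(ByteOrder.nativeOrder())
            .asFloatBuffer()
            .put(pointData);
        points.flip();

        // The upload would then look something like (binding-dependent):
        // glBufferData(GL.GL_ARRAY_BUFFER, sizeInBytes, points, GL.GL_STATIC_DRAW);
    }
}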
Upvotes: 1