Reputation:
I'm confused about VBOs and modern OpenGL. There is a direct question at the end of this post, but there is a bundle of them on the way there. If you've got any insight into any of this, I'd be grateful for a reply. And if you reply, please regard me as a complete idiot with no knowledge whatsoever.
So, my history is this:
I've got a game, which is a top-down 2D game. I've used immediate mode to render 2D sprites. The actual texture coordinates of my texture atlases were static and predefined in a separate class. The quad coordinates were defined in each entity and updated as the game progressed. When I rendered, I simply bound a specific texture, called glBegin(GL_TRIANGLES), then called each visible object's render method. This in turn sent both quad coordinates and texture coordinates to my Renderer class, which made the OpenGL calls. I then flushed the texture, which only calls glEnd().
I did this for all the different atlases, in an order that gave me the proper depth.
But times change indeed. I want to move to VBOs and shaders. I've tried several times in the past, but failed miserably. There are simply a few things I can't find on Google that would give me a complete understanding of it, and of how I can use it to speed up my game.
I know the basics. Instead of sending all the information over the bus to the GPU each render call, I can simply store everything I'll need in the initialization phase and then use shaders to calculate the end result. But...
I've got an idea for the texture coordinates. These will be static, as they will never change, so it would make sense to store them on the GPU. But how do I know which coordinates correspond to each quad/triangle? I'm thinking that instead of four floats, each renderable object in the game can have some kind of index, which it passes as an attribute to the vertex shader. The vertex shader uses the index to look up the four texture coordinates in the VBO. Is this a feasible solution? How would you implement something like that?
But as for the quad vertices I'm lost. These will move around constantly. They will be visible, then disappear, etc. That means that my quad VBO will change at each render call, and the code I've seen that updates a VBO is quite ugly. I've seen something like:
Looks quite expensive to me. And I don't understand how I can delete a certain entry (if an entity moves off screen, etc.), nor how I can manipulate a certain entry (when an entity moves). And if I have to update the VBO in this manner each render call, what's the performance gain? Looks more like a loss to me...
Also, how can I keep track of the "depth" of the resulting image? I'm doing 2D, but by "depth" I mean the order of rendering, e.g. making sure object2 is rendered on top of object1. A different VBO for each depth, perhaps? Or should I use the z-coordinate for this and enable depth testing? Will the latter not give a performance hit?
Also there's the 2D factor. I have the utmost respect for 3D, but I want to use 2D and take advantage of the fact that it should in theory yield better performance. However, from what I've gathered, this doesn't seem to be the case. In OpenGL 3+ it seems that in order for me to render 2D stuff, I need to translate it to 3D first, since that's what's processed by the hardware. Seems odd to me, since the end result on screen is 2D. Is there a way to circumvent this and save the GPU the work of 2D -> 3D -> 2D?
In other words, how can I efficiently change this:
class Main {
    void main() {
        while (true) {
            Renderer.bind(texture);
            // call render on all gameObjects
            Renderer.flush();
        }
    }
}

class GameObject {
    private float X1, X2, Y1, Y2;
    private TextureCoordinate tex;

    void render(float dt) {
        // update X1, X2, ...
        Renderer.render(tex.getX1(), tex.getX2(), ..., X1, X2, ...);
    }
}

class Renderer {
    // called once per texture atlas
    void bind(Texture texture) {
        texture.bind();
        glBegin(GL_TRIANGLES);
    }

    // called "nr of visible objects" times
    void render(texX1, texX2, texY1, texY2, quadX1, quadX2, quadY1, quadY2) {
        glTexCoord2d(texX1, texY1);
        // ... etc.
    }

    void flush() {
        glEnd();
    }
}
Into something that uses modern OpenGL?
Upvotes: 0
Views: 645
Reputation: 162164
The first and most important key insight is that vertices are not just positions. A vertex is the whole tuple of attributes you set in immediate mode drawing calls before calling glVertex. If you change only one of the attributes, you end up with a very different vertex.
Let's step back from VBOs for a moment, to get the whole glBuffer[Sub]Data business out of the way, and look at plain old client-side vertex arrays (which have been around for about as long as immediate mode).
Say you've got two arrays of positions which have exactly the same layout, but different values:
GLfloat quad_pos_a[4][2] = {
    {1,2}, {2,2}, {2,3}, {1,3}
};
GLfloat quad_pos_b[4][2] = {
    {5,5}, {10,5}, {10,20}, {5,20}
};
Other than their values, their layout is identical: four 2-element attributes in succession. This trivially allows the use of a common texture coordinate array, matching the layout of those two quads:
GLfloat quad_texc[4][2] = {
    {0,0}, {1,0}, {1,1}, {0,1}
};
I think it should be obvious to you how to use the immediate mode calls to draw quad_pos_a and quad_pos_b while sharing quad_texc. If it's not obvious, now's the time to work it out. This answer is patient and will wait until you're done…
INTERMISSION
… since putting geometry data into arrays is such an obvious no-brainer, OpenGL pretty soon introduced a concept called vertex arrays: you could tell OpenGL where to get the vertex data from, and then just tell it either how many vertices there are to draw, or which vertices to cherry-pick from the arrays given a list of indices.
Using VAs looks like this:
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glVertexPointer(
2 /* = number of elements per attribute */,
GL_FLOAT /* type of attribute elements */,
0 /* = the byte distance between attributes OR zero if tightly packed */,
quad_pos_a );
glTexCoordPointer(
2 /* = number of elements per attribute */,
GL_FLOAT /* type of attribute elements */,
0 /* = the byte distance between attributes OR zero if tightly packed */,
quad_texc );
glDrawArrays(
GL_QUADS /* what to draw */,
0 /* which index to start with */,
4 /* how many vertices to process*/ );
or if you just want to draw a triangle of the 0th, 1st and 3rd vertex:
GLushort indices[] = {0,1,3};
glDrawElements(
GL_TRIANGLES /* what */,
3 /* how many */,
GL_UNSIGNED_SHORT /* type of index elements */,
indices );
Now the key difference between plain old vertex arrays and VBOs is that VBOs place the data in OpenGL's custody – that's all there is to it. If you've understood VAs, you've understood VBOs. However, unlike VAs, you can't change a VBO's contents quite as effortlessly. The difference with shaders is that the kind of attribute is no longer predefined. Instead there are generic vertex attributes, set with glEnableVertexAttribArray (instead of glEnableClientState) and glVertexAttribPointer.
So how do you save the overhead of uploading the updated data? Well, it depends on what you consider expensive: the data has to go to the GPU eventually. So packing it up into one coalesced buffer upload is probably beneficial, since it saves the per-call overhead of each individual glVertex call.
Upvotes: 2