Reputation: 50
I'm trying to render a custom 3D object using the basic Xcode OpenGL ES 2.0 template. I've been reading Ray Wenderlich's blog on OpenGL ES 2.0 to get started. No matter what I do, my simple "sphere" looks like this:
I believe my issue is with setting up my buffers like the following:
glEnableVertexAttribArray(GLKVertexAttribPosition);
glVertexAttribPointer(GLKVertexAttribPosition, 3, GL_FLOAT, GL_FALSE, 8, BUFFER_OFFSET(6));
glEnableVertexAttribArray(GLKVertexAttribTexCoord0);
glVertexAttribPointer(GLKVertexAttribTexCoord0, 2, GL_FLOAT, GL_FALSE, 8, BUFFER_OFFSET(0));
glEnableVertexAttribArray(GLKVertexAttribNormal);
glVertexAttribPointer(GLKVertexAttribNormal, 3, GL_FLOAT, GL_FALSE, 8, BUFFER_OFFSET(3));
I have vertex data from an external program that looks like this:
// The vertex data is saved in the following format:
// u0,v0,normalx0,normaly0,normalz0,x0,y0,z0
// u1,v1,normalx1,normaly1,normalz1,x1,y1,z1
// u2,v2,normalx2,normaly2,normalz2,x2,y2,z2
// ...
#define Ball_vertexcount 559
#define Ball_polygoncount 960
float Ball_vertex[Ball_vertexcount][8]={
{0.03125, 0.00000, -0.00000, 1.00000, 0.00000, 0.00000, 1.00000, 0.00000},
{0.03125, 0.06250, 0.03806, 0.98079, 0.19132, 0.03806, 0.98079, 0.19134},
{0.00000, 0.06250, -0.00000, 0.98079, 0.19507, 0.00000, 0.98079, 0.19509},
{0.03125, 0.12500, 0.07465, 0.92389, 0.37530, 0.07466, 0.92388, 0.37533},
...
int Ball_index[Ball_polygoncount][3]={
{0, 1, 2},
{1, 3, 4},
{4, 2, 1},
{3, 5, 6},
...
There is an offset macro from Apple's template that looks like this:
#define BUFFER_OFFSET(i) ((char *)NULL + (i))
My setupGL function includes this:
glGenVertexArraysOES(1, &_vertexArray);
glBindVertexArrayOES(_vertexArray);
glGenBuffers(1, &_vertexBuffer);
glBindBuffer(GL_ARRAY_BUFFER, _vertexBuffer);
glBufferData(GL_ARRAY_BUFFER, sizeof(Ball_vertex), Ball_vertex, GL_STATIC_DRAW);
glGenBuffers(1, &_indexBuffer);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, _indexBuffer);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(Ball_index), Ball_index, GL_STATIC_DRAW);
// u0,v0,normalx0,normaly0,normalz0,x0,y0,z0
glEnableVertexAttribArray(GLKVertexAttribPosition);
glVertexAttribPointer(GLKVertexAttribPosition, 3, GL_FLOAT, GL_FALSE, 8, BUFFER_OFFSET(6));
glEnableVertexAttribArray(GLKVertexAttribTexCoord0);
glVertexAttribPointer(GLKVertexAttribTexCoord0, 2, GL_FLOAT, GL_FALSE, 8, BUFFER_OFFSET(0));
glEnableVertexAttribArray(GLKVertexAttribNormal);
glVertexAttribPointer(GLKVertexAttribNormal, 3, GL_FLOAT, GL_FALSE, 8, BUFFER_OFFSET(3));
glBindVertexArrayOES(0);
My understanding of the glVertexAttribPointer parameters, from the tutorial (a sketch of its struct-based setup follows this list), is the following:
• The first parameter specifies the attribute name to set. We just use the predefined constants GLKit set up.
• The second parameter specifies how many values are present for each vertex. If you look at the tutorial's Vertex struct, you'll see that for the position there are three floats (x, y, z) and for the color there are four floats (r, g, b, a).
• The third parameter specifies the type of each value, which is float for both Position and Color.
• The fourth parameter (normalized) is always set to false.
• The fifth parameter is the size of the stride, which is a fancy way of saying "the size of the data structure containing the per-vertex data". So we can simply pass in sizeof(Vertex) here to get the compiler to compute it for us.
• The final parameter is the offset within the structure to find this data. We can use the handy offsetof operator to find the offset of a particular field within a structure.
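For comparison, here is roughly what the tutorial's struct-based setup looks like (the Vertex struct and GLKVertexAttribColor are the tutorial's layout, not my interleaved u,v,nx,ny,nz,x,y,z array):
typedef struct {
    float Position[3];    // x, y, z
    float Color[4];       // r, g, b, a
} Vertex;
// Stride is the size of the whole struct, in bytes; per-field byte offsets
// come from offsetof (declared in <stddef.h>).
glEnableVertexAttribArray(GLKVertexAttribPosition);
glVertexAttribPointer(GLKVertexAttribPosition, 3, GL_FLOAT, GL_FALSE,
                      sizeof(Vertex), (const GLvoid *)offsetof(Vertex, Position));
glEnableVertexAttribArray(GLKVertexAttribColor);
glVertexAttribPointer(GLKVertexAttribColor, 4, GL_FLOAT, GL_FALSE,
                      sizeof(Vertex), (const GLvoid *)offsetof(Vertex, Color));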
My glkView:drawInRect: looks like this:
- (void)glkView:(GLKView *)view drawInRect:(CGRect)rect
{
    glClearColor(0.65f, 0.65f, 0.65f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    // Render the object with GLKit
    [self.effect prepareToDraw];

    glBindVertexArrayOES(_vertexArray);
    glDrawElements(GL_TRIANGLES, sizeof(Ball_index)/sizeof(Ball_index[0]), GL_UNSIGNED_SHORT, 0);
}
Can someone shed some light on what I am doing wrong? Thanks!
Upvotes: 1
Views: 2622
Reputation: 187
The fifth parameter, the stride, should be in bytes:
8 * sizeof(float)
The last parameter, the offset, should be a pointer whose value is the byte offset of that attribute's first component, so BUFFER_OFFSET should look like this:
#define BUFFER_OFFSET(a) ((char *)NULL + (a) * sizeof(float))
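Putting it together for your u,v,nx,ny,nz,x,y,z layout, the corrected calls would look something like this (note that in that layout the texture coordinates start at float 0, the normals at float 2, and the positions at float 5, not 3 and 6):
GLsizei stride = 8 * sizeof(float);  // 8 floats per vertex, in bytes

glEnableVertexAttribArray(GLKVertexAttribTexCoord0);
glVertexAttribPointer(GLKVertexAttribTexCoord0, 2, GL_FLOAT, GL_FALSE, stride, BUFFER_OFFSET(0)); // u, v
glEnableVertexAttribArray(GLKVertexAttribNormal);
glVertexAttribPointer(GLKVertexAttribNormal, 3, GL_FLOAT, GL_FALSE, stride, BUFFER_OFFSET(2));    // nx, ny, nz
glEnableVertexAttribArray(GLKVertexAttribPosition);
glVertexAttribPointer(GLKVertexAttribPosition, 3, GL_FLOAT, GL_FALSE, stride, BUFFER_OFFSET(5));  // x, y, z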
Upvotes: 2