Reputation: 43
My goal is to get instanced rendering working, but even a single glDrawElements call fails right now. Note: this code already works on Windows; on OS X, however, it fails with GL_INVALID_OPERATION.
Basically I load all the static data into buffers, and the last buffer contains dynamic data which I reload before every draw. I then call glDrawElementsInstanced (or, for debugging, glDrawElements), which promptly fails. I know because there are error prints before and after that call, and the second one always reports an OpenGL error (even with plain glDrawElements). The error does not appear if I use glDrawArrays instead.
Please see the comments in the code for additional information. Any help is greatly appreciated.
//Setup code; at this point vertices, textureCoordinates, and normals are all populated
//Allocate the space for the gpu buffers now
//and send the static data
//Rebind the array to bring them into the current context
glBindVertexArray ( vertexArray );
//Push voxel to gpu
glBindBuffer ( GL_ARRAY_BUFFER, vertexBuffer );
glBufferData ( GL_ARRAY_BUFFER, 36*sizeof(vec3), vertices, GL_STATIC_READ );
glEnableVertexAttribArray ( shader->AttributeVertex() );
glVertexAttribPointer ( shader->AttributeVertex(), 3, GL_FLOAT, GL_FALSE, 0, 0 );
glBindBuffer ( GL_ARRAY_BUFFER, textureBuffer );
glBufferData ( GL_ARRAY_BUFFER, 36*sizeof(vec2), textureCoordinates, GL_STATIC_READ );
glEnableVertexAttribArray ( shader->AttributeTexture() );
glVertexAttribPointer ( shader->AttributeTexture(), 2, GL_FLOAT, GL_FALSE, 0, 0 );
glBindBuffer ( GL_ARRAY_BUFFER, normalBuffer );
glBufferData ( GL_ARRAY_BUFFER, 36*sizeof(vec3), normals, GL_STATIC_READ );
glEnableVertexAttribArray ( shader->AttributeNormal() );
glVertexAttribPointer ( shader->AttributeNormal(), 3, GL_FLOAT, GL_FALSE, 0, 0 );
//Allocate space for positions
glBindBuffer ( GL_ARRAY_BUFFER, positionBuffer );
glBufferData ( GL_ARRAY_BUFFER, INSTANCE_RENDER_SWEEP*sizeof(vec4), positions, GL_DYNAMIC_READ );
glEnableVertexAttribArray ( shader->AttributePosition() );
glVertexAttribPointer ( shader->AttributePosition(), 4, GL_FLOAT, GL_FALSE, 0, 0 );
//This code runs a bit later, but runs over and over:
//indices is a vector<GLuint> of length 36 and is just 0-35
glBindVertexArray ( vertexArray );
glBindBuffer ( GL_ARRAY_BUFFER, positionBuffer );
glBufferSubData ( GL_ARRAY_BUFFER, 0,INSTANCE_RENDER_SWEEP*sizeof(vec4), positions );
glEnableVertexAttribArray ( shader->AttributePosition() );
glVertexAttribPointer ( shader->AttributePosition(), 4, GL_FLOAT, GL_FALSE, 0, 0 );
//The position is per-instance
//everything else is per-vertex
glVertexAttribDivisor(shader->AttributeNormal(),0);
glVertexAttribDivisor(shader->AttributePosition(),1);
glVertexAttribDivisor(shader->AttributeTexture(),0);
glVertexAttribDivisor(shader->AttributeVertex(),0);
cout << "1Err: " << glGetError() << "\n";
glDrawElements(GL_TRIANGLES,36,GL_UNSIGNED_BYTE,&indices[0]);
//glDrawElementsInstanced(GL_TRIANGLES, 36, GL_UNSIGNED_BYTE, &indices[0], bufferedVoxels);
//This next error prints out 1282, which is GL_INVALID_OPERATION
//However, if I replace the above with glDrawArrays, it works for one instance (no error)
cout << "2Err: " << glGetError() << "\n";
//All buffered voxels now drawn
bufferedVoxels = 0;
Upvotes: 2
Views: 3816
Reputation: 336
With a GL core context, you cannot pass a client-side array for the indices parameter of glDrawElements or glDrawElementsInstanced. In both cases, you need to create an index buffer (GL_ELEMENT_ARRAY_BUFFER) and store your indices in it. The indices parameter of the draw call then becomes the offset (in bytes) into the bound index buffer from which to read the indices.
However, seeing as your index array is simply 0-35, why not use glDrawArrays or glDrawArraysInstanced instead?
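If you do want to keep the indexed draw, a minimal sketch of the fix might look like the following. It reuses vertexArray, indices, and bufferedVoxels from the question's code and assumes a valid core-profile context; note that the index type must also match the vector&lt;GLuint&gt;, so it is GL_UNSIGNED_INT here, not GL_UNSIGNED_BYTE.

```cpp
// One-time setup: store the indices in a GL_ELEMENT_ARRAY_BUFFER
// instead of passing a client-side pointer at draw time.
GLuint indexBuffer;
glGenBuffers(1, &indexBuffer);

// Bind the VAO first: the element array buffer binding is part of VAO state,
// so it will be restored whenever this VAO is bound later.
glBindVertexArray(vertexArray);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, indexBuffer);
glBufferData(GL_ELEMENT_ARRAY_BUFFER,
             indices.size() * sizeof(GLuint),
             indices.data(), GL_STATIC_DRAW);

// At draw time, the last argument is now a byte offset into the bound
// index buffer (0 = start), not a pointer to client memory.
glDrawElementsInstanced(GL_TRIANGLES, 36, GL_UNSIGNED_INT,
                        (const void*)0, bufferedVoxels);
```

Since the element array binding is recorded in the VAO, the glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ...) call does not need to be repeated before every draw as long as the VAO is bound.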
Upvotes: 1
Reputation: 1382
I'm not too experienced with OpenGL, but are you using GLEW? If so, did you try setting glewExperimental = GL_TRUE before calling glewInit()?
Upvotes: 0