Reputation: 45
In my program I have a number of textures that I am trying to display. Earlier in my code I generate a VAO and an ibo (index buffer) for each texture, but when I run it, it crashes inside nvoglv32.dll at the glDrawElements()
call. I've read that a bug in the NVIDIA driver might cause this, but I doubt it. Something is probably wrong where I generate or bind the VAO or ibo, but I have no idea where. Here's the section of code where the error happens:
for (int i = 0; i < NUM_TEXTURES; i++) {
    glBindVertexArray(VAO_T[i]);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo[i]);
    glBindTexture(GL_TEXTURE_2D, texture[i]);
    glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_INT, BUFFER_OFFSET(0)); // error right here
}
This is the error I get when running in debug:
Unhandled exception at 0x0263FE4A in Comp465.exe: 0xC0000005: Access violation reading location 0x00000000.
Here's the code where I generate the VAOs, ibos, and textures:
glGenVertexArrays(NUM_TEXTURES, VAO_T);
glGenBuffers(NUM_TEXTURES, VBO_T);
glGenBuffers(NUM_TEXTURES, ibo);
glGenTextures(NUM_TEXTURES, texture);
...
for (int i = 0; i < NUM_TEXTURES; i++) {
    // Tell GL which VAO we are using
    glBindVertexArray(VAO_T[i]);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo[i]);
    glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(indices[i]), indices[i], GL_STATIC_DRAW);
    // initialize a buffer object
    glEnableVertexAttribArray(VBO_T[i]);
    glBindBuffer(GL_ARRAY_BUFFER, VBO_T[i]);
    glBufferData(GL_ARRAY_BUFFER, sizeof(point[i]) + sizeof(texCoords), NULL, GL_STATIC_DRAW);
    glBufferSubData(GL_ARRAY_BUFFER, 0, sizeof(point[i]), point[i]);
    glBufferSubData(GL_ARRAY_BUFFER, sizeof(point[i]), sizeof(texCoords), texCoords);
    GLuint vPosition = glGetAttribLocation(textureShader, "vPosition");
    glVertexAttribPointer(vPosition, 4, GL_FLOAT, GL_FALSE, 0, BUFFER_OFFSET(0));
    glEnableVertexAttribArray(vPosition);
    GLuint vTexCoord = glGetAttribLocation(textureShader, "vTexCoord");
    glVertexAttribPointer(vTexCoord, 2, GL_FLOAT, GL_FALSE, 0, BUFFER_OFFSET(sizeof(point[i])));
    glEnableVertexAttribArray(vTexCoord);
    // Get handles for the uniform structures in the texture shader program
    VP = glGetUniformLocation(textureShader, "ViewProjection");
    // Bind the texture that we want to use
    glBindTexture(GL_TEXTURE_2D, texture[i]);
    glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);
    // set texture parameters
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP);
    // Load texture
    texture[i] = loadRawTexture(texture[i], TEX_FILE_NAME[i], PixelSizes[i][0], PixelSizes[i][1]);
    if (texture[i] != 0) {
        printf("texture loaded \n");
    }
    else
        printf("Error loading texture \n");
}
Upvotes: 0
Views: 2009
Reputation: 54642
This statement certainly looks wrong:
glEnableVertexAttribArray(VBO_T[i]);
glEnableVertexAttribArray() takes an attribute location as its argument, not a buffer id. You actually use it correctly later:
GLuint vPosition = glGetAttribLocation(textureShader, "vPosition");
...
glEnableVertexAttribArray(vPosition);
GLuint vTexCoord = glGetAttribLocation(textureShader, "vTexCoord");
...
glEnableVertexAttribArray(vTexCoord);
So you should be able to simply delete that extra call with the invalid argument.
Apart from that, I noticed a couple of things that look slightly off, or at least suspicious:
The following call is meaningless if you use the programmable pipeline, which you are using based on what's shown in the rest of the code. It can be deleted.
glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);
This is probably just a naming issue, but textureShader needs to be a program object, i.e. the return value of glCreateProgram(), not a shader object.
While inconclusive without seeing the declaration, I have a bad feeling about this, and a couple other similar calls:
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(indices[i]), indices[i], GL_STATIC_DRAW);
If indices[i] is an array, i.e. the declaration looks something like this:
indices[NUM_TEXTURES][INDEX_COUNT];
then this is ok. But if indices[i] is a pointer, or decayed to a pointer when it was passed as a function argument, sizeof(indices[i]) will be the size of a pointer. You may want to double check that it gives the actual size of the index array. The same goes for the other similar calls.
Upvotes: 3