MarkPowell

Reputation: 16540

iOS: GPUImage Library and VBO

I am making use of Brad Larson's wonderful GPUImage library for image manipulation. So far, it's been great. However, I'm trying to add a filter to allow mesh deformation and I'm running into quite a few issues. Specifically, I want a filter that uses VBOs to render the quad so that I can ultimately change the vertices dynamically for the deformation.

The first step of using VBOs is causing a crash.

I created a subclass of GPUImageFilter, overriding the - (void)newFrameReadyAtTime:(CMTime)frameTime method to render a quad via VBOs. NOTE: I am simply trying to render a single quad rather than a full mesh, so that I can tackle one issue at a time.

@implementation GPUMeshImageFilter {
    GLuint _positionVBO;
    GLuint _texcoordVBO;
    GLuint _indexVBO;

    BOOL isSetup_;
}

- (void)setupBuffers
{
    static const GLsizeiptr verticesSize = 4 * 2 * sizeof(GLfloat);
    static const GLfloat squareVertices[] = {
        -1.0f, -1.0f,
        1.0f, -1.0f,
        -1.0f,  1.0f,
        1.0f,  1.0f,
    };

    static const GLsizeiptr textureSize = 4 * 2 * sizeof(GLfloat);
    static const GLfloat squareTextureCoordinates[] = {
        0.0f, 0.0f,
        1.0f, 0.0f,
        0.0f,  1.0f,
        1.0f,  1.0f,
    };

    static const GLsizeiptr indexSize = 4 * sizeof(GLushort);
    static const GLushort indices[] = {
      0, 1, 2, 3,
    };

    glGenBuffers(1, &_indexVBO);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, _indexVBO);
    glBufferData(GL_ELEMENT_ARRAY_BUFFER, indexSize, indices, GL_STATIC_DRAW);

    glGenBuffers(1, &_positionVBO);
    glBindBuffer(GL_ARRAY_BUFFER, _positionVBO);
    glBufferData(GL_ARRAY_BUFFER, verticesSize, squareVertices, GL_STATIC_DRAW);

    glEnableVertexAttribArray(0);
    glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 2*sizeof(GLfloat), 0);

    glGenBuffers(1, &_texcoordVBO);
    glBindBuffer(GL_ARRAY_BUFFER, _texcoordVBO);
    glBufferData(GL_ARRAY_BUFFER, textureSize, squareTextureCoordinates, GL_DYNAMIC_DRAW);

    glEnableVertexAttribArray(1);
    glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, 2*sizeof(GLfloat), 0);

    NSLog(@"Setup complete");
}

- (void)newFrameReadyAtTime:(CMTime)frameTime;
{
    if (!isSetup_) {
        [self setupBuffers];
        isSetup_ = YES;
    }

    if (self.preventRendering)
    {
        return;
    }

    [GPUImageOpenGLESContext useImageProcessingContext];
    [self setFilterFBO];

    [filterProgram use];

    glClearColor(backgroundColorRed, backgroundColorGreen, backgroundColorBlue, backgroundColorAlpha);
    glClear(GL_COLOR_BUFFER_BIT);

    glActiveTexture(GL_TEXTURE2);
    glBindTexture(GL_TEXTURE_2D, filterSourceTexture);

    glUniform1i(filterInputTextureUniform, 2);  

    if (filterSourceTexture2 != 0)
    {
        glActiveTexture(GL_TEXTURE3);
        glBindTexture(GL_TEXTURE_2D, filterSourceTexture2);

        glUniform1i(filterInputTextureUniform2, 3); 
    }

    NSLog(@"Draw VBO");
    glDrawElements(GL_TRIANGLE_STRIP, 4, GL_UNSIGNED_SHORT, 0);

    [self informTargetsAboutNewFrameAtTime:frameTime];
}

@end

Plugging in this filter, I see "Setup complete" and "Draw VBO" printed to the console. However, after it informs the target (in this case a GPUImageView), it crashes at the target's drawing call, which uses glDrawArrays.

glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);

Here is the complete method that contains this line.

- (void)newFrameReadyAtTime:(CMTime)frameTime;
{
    [GPUImageOpenGLESContext useImageProcessingContext];
    [self setDisplayFramebuffer];

    [displayProgram use];

    glClearColor(backgroundColorRed, backgroundColorGreen, backgroundColorBlue, backgroundColorAlpha);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    static const GLfloat textureCoordinates[] = {
        0.0f, 1.0f,
        1.0f, 1.0f,
        0.0f, 0.0f,
        1.0f, 0.0f,
    };

    glActiveTexture(GL_TEXTURE4);
    glBindTexture(GL_TEXTURE_2D, inputTextureForDisplay);
    glUniform1i(displayInputTextureUniform, 4); 

    glVertexAttribPointer(displayPositionAttribute, 2, GL_FLOAT, 0, 0, imageVertices);
    glVertexAttribPointer(displayTextureCoordinateAttribute, 2, GL_FLOAT, 0, 0, textureCoordinates);

    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);

    [self presentFramebuffer];
}

Any help would be greatly appreciated; I've been banging my head against this for a while.

Upvotes: 3

Views: 1222

Answers (2)

brett

Reputation: 385

It looks likely that the crash occurs because GL_ARRAY_BUFFER is still bound when GPUImageView's -newFrameReadyAtTime: executes.

Try unbinding the buffer (i.e. binding it to 0) at the end of -setupBuffers:

glBindBuffer(GL_ARRAY_BUFFER, 0);

This is a problem because GPUImage reuses the same OpenGL context from one GPUImageInput (e.g. GPUImageFilter, GPUImageView) to the next, I believe largely so that each step can render to an OpenGL texture and then have that texture directly available to the next GPUImageInput.

So, because GL_ARRAY_BUFFER is still bound, the behavior of glVertexAttribPointer inside GPUImageView's -newFrameReadyAtTime: changes: instead of reading the imageVertices array from client memory, it tries to point the displayPositionAttribute attribute at the bound VBO, using the imageVertices pointer value as a byte offset, which is nonsensical and likely to cause a crash. See the glVertexAttribPointer docs.
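
As a rough sketch of the two behaviors (using displayPositionAttribute and imageVertices from the GPUImageView code above, and the filter's _positionVBO; this is illustrative, not a drop-in fix):

// Case 1: no VBO bound -- the last argument is a pointer to client memory.
// This is what GPUImageView expects when it passes imageVertices.
glBindBuffer(GL_ARRAY_BUFFER, 0);
glVertexAttribPointer(displayPositionAttribute, 2, GL_FLOAT, GL_FALSE, 0, imageVertices);

// Case 2: a VBO is still bound -- the same last argument is reinterpreted as a
// byte offset into that VBO, so a real pointer value becomes a huge, invalid offset.
glBindBuffer(GL_ARRAY_BUFFER, _positionVBO);
glVertexAttribPointer(displayPositionAttribute, 2, GL_FLOAT, GL_FALSE, 0, imageVertices); // wrong: treated as an offset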

Upvotes: 2

Tim

Reputation: 35933

The code below doesn't look right to me at all. Why are you enabling vertex attrib arrays 4 and 5? You should enable the array at the attribute location you intend to use.

//position vbo
glEnableVertexAttribArray(4);
glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 2*sizeof(GLfloat), 0);

//texcoord vbo
glEnableVertexAttribArray(5);
glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, 2*sizeof(GLfloat), 0);

If your vertex attribute is at location 0, you should enable attrib 0 and set the pointer for attrib 0. If it's at location 4 (which I doubt), then you should enable attrib 4 and set the pointer for attrib 4. I can't think of any reason they should be mismatched the way you have them.

You should get the proper location by either setting it with a layout qualifier, binding it with glBindAttribLocation before shader linking, or querying it with glGetAttribLocation after linking.
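
For example, a minimal sketch of the glGetAttribLocation route -- the program handle and attribute names here are placeholders; use whatever your linked program and vertex shader actually declare:

// "program" is assumed to be the raw handle of your linked shader program, and
// "position" / "inputTextureCoordinate" are whatever your vertex shader names them.
GLint positionAttribute = glGetAttribLocation(program, "position");
GLint texCoordAttribute = glGetAttribLocation(program, "inputTextureCoordinate");
// Both return -1 if the attribute isn't found (or was optimized away).

// Enable and point each attribute at its own VBO, using the *same* location for both calls.
glBindBuffer(GL_ARRAY_BUFFER, _positionVBO);
glEnableVertexAttribArray(positionAttribute);
glVertexAttribPointer(positionAttribute, 2, GL_FLOAT, GL_FALSE, 2 * sizeof(GLfloat), 0);

glBindBuffer(GL_ARRAY_BUFFER, _texcoordVBO);
glEnableVertexAttribArray(texCoordAttribute);
glVertexAttribPointer(texCoordAttribute, 2, GL_FLOAT, GL_FALSE, 2 * sizeof(GLfloat), 0);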

Let me know if this doesn't make sense.

Upvotes: 0
