Raz0r

Reputation: 73

Crash when writing to an index buffer

I'm currently writing an engine using C++11/SDL2/OpenGL for Windows, Mac and Linux.

It runs fine on Mac and Linux, but I'm getting a nasty crash on my Windows+Nvidia desktop (the only other Windows environment I have is a virtual machine, which doesn't support my OpenGL feature set).

I've had two friends test it on different Windows+AMD devices, where it runs fine, so my issue seems to be specific to Nvidia's drivers and the state I currently have them in, meaning an SSCCE probably won't help.

Vertex buffers are created fine, and the creation of the index buffer below used to work at some unknown point in time, perhaps before a driver update...

For reference, my Buffer class is as follows:

static GLenum GetGLBufferType( BufferType bufferType ) {
    switch ( bufferType ) {
    case BufferType::Vertex: {
        return GL_ARRAY_BUFFER;
    } break;

    case BufferType::Index: {
        return GL_ELEMENT_ARRAY_BUFFER;
    } break;

    case BufferType::Uniform: {
        return GL_UNIFORM_BUFFER;
    } break;

    default: {
        return GL_NONE;
    } break;
    }
}

GLuint Buffer::GetID( void ) const {
    return id;
}

Buffer::Buffer( BufferType bufferType, const void *data, size_t size )
: type( GetGLBufferType( bufferType ) ), offset( 0 ), size( size )
{
    glGenBuffers( 1, &id );
    glBindBuffer( type, id );
    glBufferData( type, size, data, GL_STREAM_DRAW );

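    // Uniform buffer ranges must be bound at offsets that are a multiple of the
    // implementation's GL_UNIFORM_BUFFER_OFFSET_ALIGNMENT; other buffer types
    // just use a fixed 16-byte alignment for sub-allocations in MapDiscard.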
    if ( bufferType == BufferType::Uniform ) {
        glGetIntegerv( GL_UNIFORM_BUFFER_OFFSET_ALIGNMENT, reinterpret_cast<GLint *>( &alignment ) );
    }
    else {
        alignment = 16;
    }
}

Buffer::~Buffer() {
    glDeleteBuffers( 1, &id );
}

void *Buffer::Map( void ) {
    Bind();
    return glMapBufferRange( type, 0, size, GL_MAP_WRITE_BIT );
}

BufferMemory Buffer::MapDiscard( size_t allocSize ) {
    Bind();

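    // Round the requested size up to the next multiple of `alignment`
    // (assumes `alignment` is a power of two).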
    allocSize = (allocSize + alignment - 1) & ~(alignment - 1);
    if ( (offset + allocSize) > size ) {
        // We've run out of memory. Orphan the buffer and allocate some more memory
        glBufferData( type, size, nullptr, GL_STREAM_DRAW );
        offset = 0;
    }

    BufferMemory result;
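    // UNSYNCHRONIZED avoids stalling until the GPU is done with the buffer;
    // INVALIDATE_RANGE tells the driver the previous contents of this range
    // can be discarded.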
    result.devicePtr = glMapBufferRange(
        type,
        offset,
        allocSize,
        GL_MAP_WRITE_BIT | GL_MAP_UNSYNCHRONIZED_BIT | GL_MAP_INVALIDATE_RANGE_BIT
    );
    result.offset = offset;
    result.size = allocSize;
    offset += allocSize;
    return result;
}

void Buffer::Unmap( void ) {
    glUnmapBuffer( type );
}

void Buffer::BindRange( int index, size_t rangeOffset, size_t rangeSize ) const {
    if ( !rangeSize ) {
        rangeSize = size - rangeOffset;
    }

    glBindBufferRange( type, index, id, rangeOffset, rangeSize );
}

void Buffer::Bind( void ) const {
    glBindBuffer( type, id );
}

The code to create my index buffer looks like:

static const uint16_t quadIndices[6] = { 0, 2, 1, 1, 2, 3 };
quadsIndexBuffer = new Buffer( BufferType::Index, quadIndices, sizeof(quadIndices) );

The crash occurs on glBufferData( type, size, data, GL_STREAM_DRAW ); and at that point:

id = 4
type = 34963 (GL_ELEMENT_ARRAY_BUFFER)
size = 12
data = quadIndices

If I instead create the index buffer without filling it, then map it and write to it like so:

quadsIndexBuffer = new Buffer( BufferType::Index, nullptr, sizeof(quadIndices) );
BufferMemory bufferMem = quadsIndexBuffer->MapDiscard( 6 * sizeof(uint16_t) );
uint16_t *indexBuffer = static_cast<uint16_t *>( bufferMem.devicePtr );
for ( size_t i = 0u; i < 6; i++ ) {
    *indexBuffer++ = quadIndices[i];
}
quadsIndexBuffer->Unmap();

then the crash occurs on the glMapBufferRange call inside Buffer::MapDiscard instead.

The rationale behind that mapping method is that trying to map a buffer the GPU is still using can introduce busy-waits.

// Usage strategy is map-discard. In other words, keep appending to the buffer
// until we run out of memory. At this point, orphan the buffer by re-allocating
// a buffer of the same size and access bits.
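
To illustrate how it's meant to be used, here is a rough sketch rather than the engine's actual drawing code (the std::memcpy calls, which need <cstring>, are just stand-ins for whatever fills each allocation):

// Each sub-allocation advances the shared offset, so earlier allocations the
// GPU may still be reading are never overwritten. Once offset + allocSize would
// exceed size, MapDiscard orphans the storage and wraps the offset back to 0.
BufferMemory first = quadsIndexBuffer->MapDiscard( 6 * sizeof(uint16_t) );
std::memcpy( first.devicePtr, quadIndices, 6 * sizeof(uint16_t) );
quadsIndexBuffer->Unmap();

BufferMemory second = quadsIndexBuffer->MapDiscard( 6 * sizeof(uint16_t) );
std::memcpy( second.devicePtr, quadIndices, 6 * sizeof(uint16_t) );
quadsIndexBuffer->Unmap();
// second.offset begins where the first allocation ended, rounded up to `alignment`.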

I've tried searching for answers, but the only solutions I've found relate to passing incorrect sizes or the wrong argument order to glBufferData. Not helpful here.

Upvotes: 1

Views: 377

Answers (1)

Raz0r

Reputation: 73

It seems that with GL_DEBUG_OUTPUT_SYNCHRONOUS_ARB disabled the crash no longer manifests and my program behaves correctly.

I guess I was right in assuming it's a driver bug. I'll try forwarding it on to the dev team.
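
For anyone hitting the same thing, the workaround is just the corresponding glDisable call. This assumes an ARB_debug_output-style debug context with a callback already registered; messages then arrive asynchronously instead of inside the GL call that generated them:

glDisable( GL_DEBUG_OUTPUT_SYNCHRONOUS_ARB );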

For reference, this is targeting OpenGL 3.1 on an Nvidia GTX 680, driver version 350.12.
glewExperimental is enabled, and the following OpenGL context flags are set: core profile, forward compatible, debug.
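
For completeness, the context is requested through SDL2 along these lines. This is a sketch rather than the exact engine code; the window title/size are placeholders and error checking is omitted:

#include <GL/glew.h>
#include <SDL.h>

SDL_Init( SDL_INIT_VIDEO );

// Request a 3.1 core, forward-compatible, debug context before window creation.
SDL_GL_SetAttribute( SDL_GL_CONTEXT_MAJOR_VERSION, 3 );
SDL_GL_SetAttribute( SDL_GL_CONTEXT_MINOR_VERSION, 1 );
SDL_GL_SetAttribute( SDL_GL_CONTEXT_PROFILE_MASK, SDL_GL_CONTEXT_PROFILE_CORE );
SDL_GL_SetAttribute(
    SDL_GL_CONTEXT_FLAGS,
    SDL_GL_CONTEXT_DEBUG_FLAG | SDL_GL_CONTEXT_FORWARD_COMPATIBLE_FLAG
);

SDL_Window *window = SDL_CreateWindow(
    "engine", SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED,
    1280, 720, SDL_WINDOW_OPENGL
);
SDL_GLContext context = SDL_GL_CreateContext( window );

// Load GL function pointers; glewExperimental works around missing core entry points.
glewExperimental = GL_TRUE;
glewInit();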

Upvotes: 1
