CheerThe2nd

Reputation: 9

Issue with OpenGL Texture Creation for 1066x1600 Resolution

I am trying to load image data using stbi_load into an OpenGL texture object. I've successfully done this in multiple projects, but in my current approach, I’m encountering an access violation error (Access violation reading location) when creating a texture.

From what I understand, this error usually means that OpenGL is reading past the allocated texture memory, possibly because the provided texture data size is incorrect.

This issue only happens for the resolution 1066×1600. I noticed that modifying the width by -1 or +2 prevents the error, but changing the height has no effect, which suggests that the width is the problem.

What I’ve Tried

I am using OpenGL 3.3, and according to the documentation:

All implementations support texture images that are at least 1024 texels wide.

So, 1066 should be a valid width, yet it still causes an issue.
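
(For reference, the implementation's actual limit can be queried at runtime; a quick check, assuming a current OpenGL context and the usual GL headers:)

    // The 1024 in the spec is only the guaranteed minimum for OpenGL 3.3;
    // the real per-dimension limit of the driver/GPU is usually much higher.
    GLint maxSize = 0;
    glGetIntegerv(GL_MAX_TEXTURE_SIZE, &maxSize);
    printf("GL_MAX_TEXTURE_SIZE = %d\n", maxSize);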

To rule out stbi_load as the cause, I tried manually creating a texture with dummy data:

Texture::Texture(const glm::ivec2& texSize, const void* optData)
{
    m_size = texSize;

    glGenTextures(1, &ID);
    glBindTexture(GL_TEXTURE_2D, ID);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB8, m_size.x, m_size.y, 0, GL_RGB, GL_UNSIGNED_BYTE, nullptr);

    size_t dataSize = m_size.x * m_size.y * 3;  // width * height * 3 (for RGB)
    unsigned char* data = new unsigned char[dataSize];
    memset(data, 0, dataSize);

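    // the access violation reading location is raised during this upload for 1066x1600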
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, m_size.x, m_size.y, GL_RGB, GL_UNSIGNED_BYTE, data);
    delete[] data;

    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
    glBindTexture(GL_TEXTURE_2D, 0);

    if (CCJFB_Utils::CheckGLerror(__FUNCTION__))
        throw std::runtime_error("Error while allocating texture memory");
}

Even with manually allocated dummy data, the error still occurs for 1066×1600.

The only way to avoid the error is to allocate width × height × 4 bytes instead of width × height × 3, as if the texture required an extra byte per pixel. However, I upload the data as GL_RGB / GL_UNSIGNED_BYTE, which should mean 3 bytes per pixel (not 4).

My Question

Why does a GL_RGB texture with a width of 1066 cause an access violation, and why does allocating 4 bytes per pixel avoid it?

My environment:

Upvotes: -1

Views: 54

Answers (1)

zerocukor287

Reputation: 1073

Take a look at glPixelStorei (default values shown in the table below):

    pname                 Initial Value   Valid range
    GL_PACK_ALIGNMENT     4               1, 2, 4, or 8
    GL_UNPACK_ALIGNMENT   4               1, 2, 4, or 8

That means that, unless it is set otherwise, OpenGL expects each row of the pixel data to be 4-byte aligned.

As your research shows, a width of 1066 causes the problem because a row is 1066 * 3 = 3,198 bytes, which is not a multiple of 4. However, if you change to a 4-byte-per-pixel format, every row is automatically 4-byte aligned.

I noticed that modifying the width by -1 or +2 prevents the error

Good find: with a width of 1068, a row is 1068 * 3 = 3,204 bytes, which is 4-byte aligned. (A width of 1065 still gives a misaligned 3,195-byte row; the -1 case presumably avoids the crash only because the smaller overread happens to land in accessible memory.)
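
To make the arithmetic concrete, here is a small sketch of the row stride OpenGL assumes when reading client pixel data under a given GL_UNPACK_ALIGNMENT (alignedRowSize is just an illustrative helper):

    // Tightly packed row size, rounded up to the next multiple of the alignment.
    static size_t alignedRowSize(size_t width, size_t bytesPerPixel, size_t alignment)
    {
        size_t row = width * bytesPerPixel;
        return ((row + alignment - 1) / alignment) * alignment;
    }

    // alignedRowSize(1066, 3, 4) == 3200 -> 2 padding bytes per row that a
    //   tightly packed 1066 * 1600 * 3 buffer does not contain, so OpenGL
    //   reads past the end of the allocation.
    // alignedRowSize(1068, 3, 4) == 3204 -> already a multiple of 4, no padding.
    // alignedRowSize(1066, 3, 1) == 3198 -> matches the tight packing exactly.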

Overall, since your texture data is 3 bytes per pixel, the easiest solution would be to call glPixelStorei(GL_UNPACK_ALIGNMENT, 1); before any glTexImage2D call, as it works for every width.

Alternatively, depending on the input size, set it to 1, 2, 4, or 8 as written in the documentation, and restore it to 4 at the end of your method.

Texture::Texture(const glm::ivec2& texSize, const void* optData)
{
    m_size = texSize;

    glGenTextures(1, &ID);
    glBindTexture(GL_TEXTURE_2D, ID);
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);   // insert before any glTexImage2D call
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB8, m_size.x, m_size.y, 0, GL_RGB, GL_UNSIGNED_BYTE, nullptr);
    //...
    // same as your code
}
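
For the second option, here is a sketch of picking the largest valid alignment that divides the row size and restoring the default afterwards (unpackAlignmentFor is an illustrative helper, not part of any API):

    // Largest valid alignment (8, 4, 2 or 1) that evenly divides the tightly
    // packed row size, so OpenGL assumes no padding bytes at the end of a row.
    static GLint unpackAlignmentFor(int width, int bytesPerPixel)
    {
        const int row = width * bytesPerPixel;
        for (GLint a : { 8, 4, 2, 1 })
            if (row % a == 0)
                return a;
        return 1;  // unreachable, 1 divides everything
    }

    // Around the upload, then restore the default:
    glPixelStorei(GL_UNPACK_ALIGNMENT, unpackAlignmentFor(m_size.x, 3));
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, m_size.x, m_size.y, GL_RGB, GL_UNSIGNED_BYTE, data);
    glPixelStorei(GL_UNPACK_ALIGNMENT, 4);  // back to the initial value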

Upvotes: 0
