aslg

Reputation: 1974

OpenGL - Why are pixel values in ABGR and how to use them in RGBA

I have some code for loading textures where I use DevIL to load the images and then OpenGL creates a texture from the pixels. This code works fine and the texture shows up properly.

Besides that, I can also build an array from within the program to create a texture, or change the texture's pixels directly. My problem is here: when handling the pixels, their format seems to be ABGR rather than RGBA, as I would have liked.

I stumbled upon this SO question that refers to the format that's passed to the glTexImage2D function:

(...) If you have GL_RGBA and GL_UNSIGNED_INT_8_8_8_8, that means that pixels are stored in 32-bit integers, and the colors are in the logical order RGBA in such an integer, e.g. the red is in the high-order byte and the alpha is in the low-order byte. But if the machine is little-endian (as with Intel CPUs), it follows that the actual order in memory is ABGR. Whereas, GL_RGBA with GL_UNSIGNED_BYTE will store the bytes in RGBA order regardless whether the computer is little-endian or big-endian. (...)

Indeed, I have an Intel CPU. The images are loaded just fine the way things are right now, and I actually use the GL_RGBA format and GL_UNSIGNED_BYTE type.

GLuint makeTexture( const GLuint* pixels, GLuint width, GLuint height ) {
    GLuint texture = 0;
    glGenTextures( 1, &texture );
    glBindTexture( GL_TEXTURE_2D, texture );

    // Upload the pixel data; with GL_UNSIGNED_BYTE each pixel is read as
    // 4 consecutive bytes in R, G, B, A order.
    glTexImage2D( GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixels );

    glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR );
    glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR );

    glBindTexture( GL_TEXTURE_2D, 0 );

    GLenum error = glGetError();
    if ( error != GL_NO_ERROR ) {
        glDeleteTextures( 1, &texture );  // don't leak the texture on failure
        return 0;
    }

    return texture;
}

This function is used in my two methods for loading textures: the one that loads an image from a file and the one that creates a texture from an array.

Let's say that I want to fill an array of pixels and create a texture from it:

GLuint pixels[ 128 * 128 ];
for ( int i = 0; i < 128 * 128; ++i ) {
    pixels[ i ] = 0x800000FF;
}
texture.loadImageArray( pixels, 128, 128 );

By filling the pixels with this value I would expect to see a slightly dark red color.

red = 0x80, green = 0x00, blue = 0x00, alpha = 0xFF

But instead I get a half-transparent red:

alpha = 0x80, blue = 0x00, green = 0x00, red = 0xFF
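
A quick check of the raw bytes (sketch below; it assumes a little-endian machine, which matches my Intel CPU) shows why GL_UNSIGNED_BYTE reads the value this way:

#include <cstdint>
#include <cstdio>
#include <cstring>

int main() {
    const std::uint32_t pixel = 0x800000FFu;  // intended as R=0x80, G=0x00, B=0x00, A=0xFF
    unsigned char bytes[ 4 ];
    std::memcpy( bytes, &pixel, 4 );          // the raw bytes that GL_UNSIGNED_BYTE will read

    // On a little-endian machine this prints "FF 00 00 80", so GL_RGBA +
    // GL_UNSIGNED_BYTE sees R=0xFF, G=0x00, B=0x00, A=0x80.
    std::printf( "%02X %02X %02X %02X\n", bytes[ 0 ], bytes[ 1 ], bytes[ 2 ], bytes[ 3 ] );
    return 0;
}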

Rather than using raw unsigned ints, I made a structure to help me handle individual channels:

struct Color4C {
    unsigned char alpha;
    unsigned char blue;
    unsigned char green;
    unsigned char red;
    ...
};

I can easily replace an array of unsigned ints with an array of Color4C and the result is the same. If I invert the order of the channels (red first, alpha last), then I can pass 0xRRGGBBAA values and it works.
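
For example, something along these lines is what I mean (Color4C_RGBA and its constructor are just illustrative names, not my actual code); the shifts extract the channels from the literal, so the result does not depend on how the integer is laid out in memory:

struct Color4C_RGBA {
    unsigned char red;
    unsigned char green;
    unsigned char blue;
    unsigned char alpha;

    // Unpack a 0xRRGGBBAA literal into the byte order that GL_RGBA +
    // GL_UNSIGNED_BYTE expects (red in the first byte).
    explicit Color4C_RGBA( unsigned int rgba )
        : red  ( ( rgba >> 24 ) & 0xFF )
        , green( ( rgba >> 16 ) & 0xFF )
        , blue ( ( rgba >>  8 ) & 0xFF )
        , alpha(   rgba         & 0xFF )
    {}
};

An array of these, filled from 0xRRGGBBAA literals, ends up with the bytes in R, G, B, A order in memory, which is what GL_RGBA with GL_UNSIGNED_BYTE reads.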

The easy solution is to simply handle these values in ABGR format, but I find it easier to work with RGBA values. If I want to use hardcoded color values, I would prefer to write them like 0xRRGGBBAA and not 0xAABBGGRR (e.g. 0x800000FF rather than 0xFF000080 for the dark red above).

But let's say I start using the ABGR format. If I were to run my code on another machine, would I suddenly see strange colors wherever I change pixels/channels directly? Is there a better solution?

Upvotes: 5

Views: 4029

Answers (1)

idbrii

Reputation: 11916

Promoting some helpful comments to an answer:

0xRR,0xGG,0xBB,0xAA on your (Intel, little-endian) machine is 0xAABBGGRR. You've already found the information saying that GL_UNSIGNED_BYTE preserves the format of binary blocks of data across machines, while GL_UNSIGNED_INT_8_8_8_8 preserves the format of literals like 0xRRGGBBAA. Because different machines have different correspondence between binary and literals, it is absolutely impossible for you to have both types of portability at once. – Ben Voigt

[Writing 0xAABBGGRR would actually be RGBA in your machine, but running this code on another machine could show different results] because OpenGL will reinterpret it as UNSIGNED_BYTE. C++ needs an endianness library, after all. – WorldSEnder
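
If you want to keep writing 0xRRGGBBAA literals, one option (desktop OpenGL; a sketch against the makeTexture call from the question) is to upload with GL_UNSIGNED_INT_8_8_8_8, so that GL treats each GLuint as one whole pixel with red in the most significant byte, regardless of endianness:

// Same call as in makeTexture, but each element of pixels is now read as a
// single 32-bit value whose high byte is red and low byte is alpha, so a
// 0x800000FF literal really means R=0x80, G=0x00, B=0x00, A=0xFF on any host.
glTexImage2D( GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
              GL_RGBA, GL_UNSIGNED_INT_8_8_8_8, pixels );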

Upvotes: 1
