x-x

Reputation: 7515

Why is drawing an 8bit texture with OpenGL drawing black pixels instead of transparent?

OpenGL Setup:

glEnable( GL_BLEND );
glBlendFunc( GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA );
glEnable( GL_DEPTH_TEST );
glEnable( GL_TEXTURE_2D );
glEnable( GL_CULL_FACE );
glCullFace( GL_BACK );
glClearColor( 0.0, 1.0, 1.0, 1.0 );

Initializing the texture:

// 16x16 X pattern
uint8_t buffer[ 16*16 ] =
{
    255, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255,
    0, 255, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 0,
    0, 0, 255, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 0, 0,
    0, 0, 0, 255, 0, 0, 0, 0, 0, 0, 0, 0, 255, 0, 0, 0,
    0, 0, 0, 0, 255, 0, 0, 0, 0, 0, 0, 255, 0, 0, 0, 0,
    0, 0, 0, 0, 0, 255, 0, 0, 0, 0, 255, 0, 0, 0, 0, 0,
    0, 0, 0, 0, 0, 0, 255, 0, 0, 255, 0, 0, 0, 0, 0, 0,
    0, 0, 0, 0, 0, 0, 0, 255, 255, 0, 0, 0, 0, 0, 0, 0,
    0, 0, 0, 0, 0, 0, 0, 255, 255, 0, 0, 0, 0, 0, 0, 0,
    0, 0, 0, 0, 0, 0, 255, 0, 0, 255, 0, 0, 0, 0, 0, 0,
    0, 0, 0, 0, 0, 255, 0, 0, 0, 0, 255, 0, 0, 0, 0, 0,
    0, 0, 0, 0, 255, 0, 0, 0, 0, 0, 0, 255, 0, 0, 0, 0,
    0, 0, 0, 255, 0, 0, 0, 0, 0, 0, 0, 0, 255, 0, 0, 0,
    0, 0, 255, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 0, 0,
    0, 255, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 0,
    255, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255,
};

GLuint texture_id;
glGenTextures( 1, &texture_id );
glBindTexture( GL_TEXTURE_2D, texture_id );
glTexImage2D( GL_TEXTURE_2D, 0, GL_RGBA8, 16, 16, 0, GL_LUMINANCE, GL_UNSIGNED_BYTE, buffer );
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR );
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR );

The textures are drawn as quads using glBegin/glEnd. The color for each vertex is white with full alpha: {r=255,g=255,b=255,a=255}.
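
For reference, a minimal sketch of the kind of immediate-mode draw call described above; the coordinates are illustrative, not the original code:

// Sketch only: a single textured quad drawn with glBegin/glEnd.
glBindTexture( GL_TEXTURE_2D, texture_id );
glBegin( GL_QUADS );
    glColor4ub( 255, 255, 255, 255 );  // white, full alpha
    glTexCoord2f( 0.0f, 0.0f ); glVertex2f( -1.0f, -1.0f );
    glTexCoord2f( 1.0f, 0.0f ); glVertex2f(  1.0f, -1.0f );
    glTexCoord2f( 1.0f, 1.0f ); glVertex2f(  1.0f,  1.0f );
    glTexCoord2f( 0.0f, 1.0f ); glVertex2f( -1.0f,  1.0f );
glEnd();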

Here's an example scene. The photo and cheese are both loaded from PNG images. The cheese has transparent holes, which show the photo and background behind it. I expected the X pattern to be transparent too:

[Image: example scene]

Why is the quad black instead of transparent, and how can I fix my code to draw what I was expecting?

This question may be similar, but so far I am unable to apply the brief answer to my current problem.

Update: I think I solved it, thanks to the answers below. I changed...

glTexImage2D( GL_TEXTURE_2D, 0, GL_RGBA8, 16, 16, 0, GL_LUMINANCE, GL_UNSIGNED_BYTE, buffer );

to

glTexImage2D( GL_TEXTURE_2D, 0, GL_ALPHA, 16, 16, 0, GL_ALPHA, GL_UNSIGNED_BYTE, buffer );

... which gives the desired result.

Upvotes: 5

Views: 9447

Answers (4)

Neil Roy

Reputation: 632

Eight-bit images do not store colours directly; they have a colour palette. Zero represents index 0 of the colour palette for that image, which contains 256 colours. Eight-bit images do not have an alpha channel for transparency. The PNG image you used does have an alpha channel; it would be a 32-bit image with 8 bits for red, 8 bits for green, 8 bits for blue (24 bits for colour) and 8 bits for alpha. You're mixing the two formats.

Upvotes: 0

Dietrich Epp

Reputation: 213578

Change GL_RGBA8 (which, fed from GL_LUMINANCE data, makes a grayscale image whose alpha channel is always 1) to GL_INTENSITY:

glTexImage2D(GL_TEXTURE_2D, 0, GL_INTENSITY, 16, 16, 0,
             GL_LUMINANCE, GL_UNSIGNED_BYTE, buffer);

The GL_RGBA8 format, created from GL_LUMINANCE, gives pixels of the form (Y, Y, Y, 1), but GL_INTENSITY with GL_LUMINANCE gives (Y, Y, Y, Y).

You will also want to change your blend mode to assume premultiplied alpha, e.g. change

glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

to:

glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);

Alternative:

You can also use GL_ALPHA, and then use your ordinary blending mode for non-premultiplied alpha:

glTexImage2D(GL_TEXTURE_2D, 0, GL_ALPHA, 16, 16, 0,
             GL_ALPHA, GL_UNSIGNED_BYTE, buffer);

Alternative #2:

You can keep using GL_LUMINANCE, and change the blending mode.

glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_COLOR);

This has the drawback that you can't color the texture without using something like glBlendColor (which is not part of the OpenGL headers that ship with MSVC, so you have to use GLEW or something like that):

glBlendColor(...);
glBlendFunc(GL_CONSTANT_COLOR, GL_ONE_MINUS_SRC_COLOR);

Alternative #3:

Use OpenGL 3, and change your fragment shader to handle single-channel textures in the desired way.
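
For illustration, a hypothetical fragment shader for that route, shown as a C string; it assumes the texture was uploaded as GL_RED, and the uniform/varying names are made up for this sketch:

static const char *frag_src =
    "#version 150\n"
    "uniform sampler2D tex;\n"
    "in vec2 uv;\n"
    "out vec4 frag_color;\n"
    "void main() {\n"
    "    // Replicate the single red channel into the alpha channel.\n"
    "    float a = texture(tex, uv).r;\n"
    "    frag_color = vec4(1.0, 1.0, 1.0, a);\n"
    "}\n";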

Upvotes: 3

SigTerm

Reputation: 26429

Because luminance is color, and transparency (alpha) is not color.

If you want transparency, use a different format - GL_LUMINANCE_ALPHA or something similar - and store the transparency in a separate channel.

Also, this is already explained in the documentation:

GL_LUMINANCE

Each element is a single luminance value. The GL converts it to floating point, then assembles it into an RGBA element by replicating the luminance value three times for red, green, and blue and attaching 1 for alpha. Each component is then multiplied by the signed scale factor GL_c_SCALE, added to the signed bias GL_c_BIAS, and clamped to the range [0,1] (see glPixelTransfer).

--edit--

Any way to keep the 8bit per pixel and achieve the same effect?

I think you could "tell" OpenGL that the initial image is indexed color and then set up a proper RGBA palette (where for every element R==G==B==A==index). See glPixelTransfer and glPixelMap for details.
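
A sketch of what that might look like, untested; it assumes an identity map where every entry's R, G, B and A equal index/255:

// Build a 256-entry map: each index maps to itself in all four channels.
GLfloat map[256];
for (int i = 0; i < 256; i++)
    map[i] = i / 255.0f;

glPixelTransferi(GL_MAP_COLOR, GL_TRUE);
glPixelMapfv(GL_PIXEL_MAP_I_TO_R, 256, map);
glPixelMapfv(GL_PIXEL_MAP_I_TO_G, 256, map);
glPixelMapfv(GL_PIXEL_MAP_I_TO_B, 256, map);
glPixelMapfv(GL_PIXEL_MAP_I_TO_A, 256, map);

// Upload the 8-bit buffer as color indices instead of luminance.
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 16, 16, 0,
             GL_COLOR_INDEX, GL_UNSIGNED_BYTE, buffer);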

Upvotes: 3

datenwolf

Reputation: 162309

@Dietrich Epp's answer was almost correct, only that the format you're looking for is GL_ALPHA:

glTexImage2D(GL_TEXTURE_2D, 0, GL_ALPHA8, 16, 16, 0, GL_ALPHA, GL_UNSIGNED_BYTE, buffer);

Then set the blend function to glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA); or glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);, depending on whether your values are premultiplied. Last but not least, set the texture environment to GL_MODULATE:

glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);

Now you can set the color of the 'X' with glColor.

Another approach is not using blending but alpha testing.
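
A sketch of the alpha-test route; the 0.5 threshold is an arbitrary example value:

glDisable(GL_ALPHA_TEST);  // off by default; shown here for clarity
glDisable(GL_BLEND);
glEnable(GL_ALPHA_TEST);
glAlphaFunc(GL_GREATER, 0.5f);  // discard fragments with alpha <= 0.5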

Upvotes: 6
