Patrick T Nelson

Reputation: 1254

OpenGL ES on iOS saved image has incorrect result (Black color on partially transparent edges)

I'm building on top of Apple's GLPaint example for an app with a drawing view. I use this to clear:

glClearColor(0.0f, 0.0f, 0.0f, 0.0f);

This lets me place an image behind what is being drawn; I later merge the two images. However, when I save the mostly transparent image from the OpenGL context, any pixel that is partially transparent appears to blend with black. (Changing the clear color to transparent white makes it blend with white instead.) I've tried many options and haven't found a workaround.

Here are two screenshots: the first shows what the drawing app looks like, and the second shows what I get when I attempt to save an image from the OpenGL context.

From screenshot: Image as seen on iOS

From glReadPixels, being displayed in a UIImageView: Image I get from glReadPixels

Relevant Code:

-(UIImage*)mergeImage:(UIImage*)image1 withImage:(UIImage*)image2{

CGSize size = image1.size;

// UIGraphicsBeginImageContext(size);
UIGraphicsBeginImageContextWithOptions(size, NO, 1.0f);

[image1 drawAtPoint:CGPointMake(0.0f, 0.0f)];
[image2 drawAtPoint:CGPointMake(0.0f, 0.0f)];

UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();


return result;

}

-(UIImage *) glToUIImage{

int imageWidth, imageHeight;

imageWidth = self.frame.size.width;
imageHeight = self.frame.size.height;

NSInteger myDataLength = imageWidth * imageHeight * 4;


// allocate array and read pixels into it.
GLubyte *buffer = (GLubyte *) malloc(myDataLength);
glReadPixels(0, 0, imageWidth, imageHeight, GL_RGBA, GL_UNSIGNED_BYTE, buffer);

// gl renders "upside down" so swap top to bottom into new array.
// there's gotta be a better way, but this works.

GLubyte *buffer2 = (GLubyte *) malloc(myDataLength);

for(int y = 0; y < imageHeight; y++){
    for(int x = 0; x < imageWidth * 4; x++){
        buffer2[((imageHeight - 1) - y) * imageWidth * 4 + x] = buffer[y * 4 * imageWidth + x];
    }
}

// make data provider with data.
CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, buffer2, myDataLength, NULL);
//CGDataProviderRef provider2 = CGDataProviderCreateWithData(NULL, white, myDataLength, NULL);

// prep the ingredients
int bitsPerComponent = 8;
int bitsPerPixel = 32;
int bytesPerRow = 4 * imageWidth;
CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault | kCGImageAlphaLast;
CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;

// make the cgimage
CGImageRef imageRef = CGImageCreate(imageWidth, imageHeight, bitsPerComponent, bitsPerPixel, bytesPerRow, colorSpaceRef, bitmapInfo, provider, NULL, NO, renderingIntent);


UIImage *myImage = [UIImage imageWithCGImage:imageRef];

CGImageRelease(imageRef);
CGColorSpaceRelease(colorSpaceRef);

UIImageWriteToSavedPhotosAlbum(myImage, NULL, NULL, NULL);

CGDataProviderRelease(provider);
free(buffer);
free(buffer2);

return myImage;

}

-(UIImage *) glToUIImageOnWhite{

int imageWidth, imageHeight;

imageWidth = self.frame.size.width;
imageHeight = self.frame.size.height;

NSInteger myDataLength = imageWidth * imageHeight * 4;


// allocate array and read pixels into it.
GLubyte *buffer = (GLubyte *) malloc(myDataLength);
glReadPixels(0, 0, imageWidth, imageHeight, GL_RGBA, GL_UNSIGNED_BYTE, buffer);

// gl renders "upside down" so swap top to bottom into new array.
// there's gotta be a better way, but this works.

GLubyte *buffer2 = (GLubyte *) malloc(myDataLength);

GLubyte R, G, B, A;

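// flip vertically and composite each pixel onto an opaque white background
// (255 - (A - C) == C + (255 - A), i.e. source-over white for premultiplied data)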
for(int y = 0; y < imageHeight; y++){
    for(int x = 0; x < imageWidth*4; x+=4){
        R = buffer[y * 4 * imageWidth + x];
        G = buffer[y * 4 * imageWidth + x +1];
        B = buffer[y * 4 * imageWidth + x +2];
        A = buffer[y * 4 * imageWidth + x +3];

        buffer2[((imageHeight - 1) - y) * imageWidth * 4 + x]    = 255-(A-R);
        buffer2[((imageHeight - 1) - y) * imageWidth * 4 + x +1] = 255-(A-G);
        buffer2[((imageHeight - 1) - y) * imageWidth * 4 + x +2] = 255-(A-B);

        buffer2[((imageHeight - 1) - y) * imageWidth * 4 + x +3] = 255;
    }
}

// make data provider with data.
CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, buffer2, myDataLength, NULL);
//CGDataProviderRef provider2 = CGDataProviderCreateWithData(NULL, white, myDataLength, NULL);

// prep the ingredients
int bitsPerComponent = 8;
int bitsPerPixel = 32;
int bytesPerRow = 4 * imageWidth;
CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault | kCGImageAlphaLast;
CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;

// make the cgimage
CGImageRef imageRef = CGImageCreate(imageWidth, imageHeight, bitsPerComponent, bitsPerPixel, bytesPerRow, colorSpaceRef, bitmapInfo, provider, NULL, NO, renderingIntent);

UIImage *myImage = [UIImage imageWithCGImage:imageRef];

CGImageRelease(imageRef);
CGColorSpaceRelease(colorSpaceRef);


CGDataProviderRelease(provider);

free(buffer);
free(buffer2);

return myImage;

}

Upvotes: 0

Views: 611

Answers (1)

Tark

Reputation: 5175

Since the alpha regions are too dark, it is possible the color buffer contains premultiplied alpha. In that case, your CGImage should be created using:

CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault | kCGImageAlphaPremultipliedLast;
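
For illustration, here is a minimal sketch of how the CGImage creation from the question could look with that one change applied; the helper function and its name are hypothetical, not part of the original post:

// Hypothetical helper: wraps a glReadPixels buffer in a CGImage, declaring
// the data as premultiplied RGBA so Core Graphics does not darken partially
// transparent pixels a second time when the image is drawn or saved.
static CGImageRef CreatePremultipliedRGBAImage(GLubyte *pixels, int width, int height) {
    size_t dataLength = (size_t)width * height * 4;
    CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, pixels, dataLength, NULL);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();

    // The only change from the question's code: premultiplied instead of straight alpha.
    CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault | kCGImageAlphaPremultipliedLast;

    CGImageRef imageRef = CGImageCreate(width, height, 8, 32, 4 * width,
                                        colorSpace, bitmapInfo, provider,
                                        NULL, NO, kCGRenderingIntentDefault);
    CGColorSpaceRelease(colorSpace);
    CGDataProviderRelease(provider);
    return imageRef; // caller releases with CGImageRelease
}

If straight (non-premultiplied) alpha is required downstream, the alternative is to divide each color channel by its alpha value before building the CGImage.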

Upvotes: 2
