Reputation: 2772
I am trying to find a faster way to generate a Gaussian-blurred image. This blog works great for most images, but when the image has a transparent background color, the blurred image looks bad.
The code below is copied from the blog:
-(UIImage *)vImageBlurWithNumber:(CGFloat)blur
{
    if (blur < 0.f || blur > 1.f) {
        blur = 0.5f;
    }
    // The convolution kernel size must be odd.
    int boxSize = (int)(blur * 100);
    boxSize = boxSize - (boxSize % 2) + 1;

    CGImageRef img = self.CGImage;
    vImage_Buffer inBuffer, outBuffer;
    vImage_Error error;
    void *pixelBuffer;

    CGDataProviderRef inProvider = CGImageGetDataProvider(img);
    CFDataRef inBitmapData = CGDataProviderCopyData(inProvider);

    inBuffer.width = CGImageGetWidth(img);
    inBuffer.height = CGImageGetHeight(img);
    inBuffer.rowBytes = CGImageGetBytesPerRow(img);
    inBuffer.data = (void *)CFDataGetBytePtr(inBitmapData);

    pixelBuffer = malloc(CGImageGetBytesPerRow(img) * CGImageGetHeight(img));
    if (pixelBuffer == NULL) {
        NSLog(@"No pixelbuffer");
    }

    outBuffer.data = pixelBuffer;
    outBuffer.width = CGImageGetWidth(img);
    outBuffer.height = CGImageGetHeight(img);
    outBuffer.rowBytes = CGImageGetBytesPerRow(img);

    // Maybe I should modify the last 2 parameters below; how?
    error = vImageBoxConvolve_ARGB8888(&inBuffer,
                                       &outBuffer,
                                       NULL,
                                       0,
                                       0,
                                       boxSize,
                                       boxSize,
                                       NULL,
                                       kvImageEdgeExtend); // kvImageBackgroundColorFill ?
    if (error) {
        NSLog(@"error from convolution %ld", error);
    }

    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef ctx = CGBitmapContextCreate(outBuffer.data,
                                             outBuffer.width,
                                             outBuffer.height,
                                             8,
                                             outBuffer.rowBytes,
                                             colorSpace,
                                             kCGImageAlphaNoneSkipLast);
    CGImageRef imageRef = CGBitmapContextCreateImage(ctx);
    UIImage *returnImage = [UIImage imageWithCGImage:imageRef];

    // clean up
    CGContextRelease(ctx);
    CGColorSpaceRelease(colorSpace);
    free(pixelBuffer);
    CFRelease(inBitmapData);
    CGImageRelease(imageRef);
    return returnImage;
}
I have also tried to get a Gaussian blur effect by using Apple's WWDC 2013 UIImage-ImageEffects category, but the result of that category looks more like frosted glass than a Gaussian blur. Blur in Core Image works fine, but it is much slower than the vImage way, and GPUImage is also slower than vImage.
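For reference, here is a minimal sketch of the Core Image route I am comparing against (assuming the built-in CIGaussianBlur filter and an arbitrary radius of 10):
CIContext *context = [CIContext contextWithOptions:nil];
CIImage *input = [CIImage imageWithCGImage:self.CGImage];
CIFilter *filter = [CIFilter filterWithName:@"CIGaussianBlur"];
[filter setValue:input forKey:kCIInputImageKey];
[filter setValue:@10.0 forKey:kCIInputRadiusKey];
// Crop back to the input extent; the blur expands the image edges.
CGImageRef cgimg = [context createCGImage:filter.outputImage fromRect:input.extent];
UIImage *blurred = [UIImage imageWithCGImage:cgimg];
CGImageRelease(cgimg);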
Please help me modify the vImage code above. I have tried a lot, and have posted the code with a demo here :)
Upvotes: 0
Views: 545
Reputation: 21
Blurs may look bad if the input data is not premultiplied. CG prefers premultiplied data for this reason and its contexts cannot be made non-premultiplied, but some image providers are not premultiplied, so the data can come in a variety of ways. What you get from the source image depends on the image format, which is why you need to look at the image's alpha info.
The problem occurs for non-opaque, non-premultiplied pixels, whose color will be averaged into the pixels around them. If a pixel was fully transparent, its color information is meaningless, and anything could be encoded in there. (Some artists go out of their way to fill such regions with magenta!) If it was partially opaque, then the color is merely overrepresented. The solution is to premultiply the data first: each color channel is scaled by the alpha, so for example a half-transparent red pixel {255, 0, 0, 128} becomes roughly {128, 0, 0, 128}, and a fully transparent pixel becomes {0, 0, 0, 0} no matter what color was stored in it. That makes sure that transparent pixels are not overrepresented in the result color of their neighbors, and fully transparent ones don't contribute at all.
Premultiplication might be inserted before your convolution with something like this:
CGImageAlphaInfo alpha = CGImageGetAlphaInfo(img);
vImage_Buffer convolutionSrcBuffer = inBuffer;
switch (alpha)
{
    case kCGImageAlphaLast:
        vImageBuffer_Init(&convolutionSrcBuffer, inBuffer.height, inBuffer.width, 32, kvImageNoFlags);
        vImagePremultiplyData_RGBA8888(&inBuffer, &convolutionSrcBuffer, kvImageNoFlags);
        alpha = kCGImageAlphaPremultipliedLast;
        break;
    case kCGImageAlphaFirst:
        vImageBuffer_Init(&convolutionSrcBuffer, inBuffer.height, inBuffer.width, 32, kvImageNoFlags);
        vImagePremultiplyData_ARGB8888(&inBuffer, &convolutionSrcBuffer, kvImageNoFlags);
        alpha = kCGImageAlphaPremultipliedFirst;
        break;
    default:
        // already premultiplied or opaque; use the input buffer as-is
        break;
}
...
error = vImageBoxConvolve_ARGB8888(&convolutionSrcBuffer,
                                   &outBuffer,
                                   NULL,
                                   0,
                                   0,
                                   boxSize,
                                   boxSize,
                                   NULL,
                                   kvImageEdgeExtend); // kvImageBackgroundColorFill ?
if (convolutionSrcBuffer.data != inBuffer.data)
    free(convolutionSrcBuffer.data);
Another source of error in your code: you may have taken a non-premultiplied or premultiplied image and slammed it into an opaque kCGImageAlphaNoneSkipLast context, causing the context to treat the transparent image as opaque even if it isn't. The image you get may not draw correctly. So use the actual alpha info of the image content instead:
...
CGContextRef ctx = CGBitmapContextCreate(outBuffer.data,
                                         outBuffer.width,
                                         outBuffer.height,
                                         8,
                                         outBuffer.rowBytes,
                                         colorSpace, // <<== wrong colorspace!
                                         alpha);     // <<== changed this line
Note that CGBitmapContextCreate() is quite picky about which alpha/byte-order combinations it accepts, so the above will fail in some cases. These headaches can be avoided by using vImageCreateCGImageFromBuffer() instead. Also, while we are at it: if you had gotten the image with vImageBuffer_InitWithCGImage(), we could have asked for the data to be in a premultiplied format in the first place, and made other minor adjustments like alpha first/last if we wanted.
Some images are rotated (see the UIImage's imageOrientation); your code doesn't handle that.
You also threw away the image's colorspace! Use the one in the source image, not device RGB. Even if everything else were correct, this would still make things look bad. You can use vImageBuffer_InitWithCGImage() to convert to device RGB if that is really the colorspace you want, though it would be better to find out what the view's graphics context colorspace is and use that instead.
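Here is a minimal sketch of that route (assuming the same img and boxSize as your code, inside a method that returns a UIImage *; vImageBuffer_InitWithCGImage() and vImageCreateCGImageFromBuffer() require iOS 7 / OS X 10.9):
vImage_CGImageFormat format = {
    .bitsPerComponent = 8,
    .bitsPerPixel = 32,
    .colorSpace = CGImageGetColorSpace(img),                      // keep the source colorspace
    .bitmapInfo = (CGBitmapInfo)kCGImageAlphaPremultipliedFirst,  // premultiplied ARGB8888 up front
    .renderingIntent = kCGRenderingIntentDefault,
};
vImage_Buffer src, dest;
vImage_Error err = vImageBuffer_InitWithCGImage(&src, &format, NULL, img, kvImageNoFlags);
if (err != kvImageNoError)
    return nil;
vImageBuffer_Init(&dest, src.height, src.width, 32, kvImageNoFlags);
vImageBoxConvolve_ARGB8888(&src, &dest, NULL, 0, 0, boxSize, boxSize, NULL, kvImageEdgeExtend);
// No CGBitmapContextCreate() alpha headaches on the way back out:
CGImageRef blurred = vImageCreateCGImageFromBuffer(&dest, &format, NULL, NULL, kvImageNoFlags, &err);
UIImage *result = [UIImage imageWithCGImage:blurred];
CGImageRelease(blurred);
free(src.data);
free(dest.data);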
For completeness, I should point out that the box convolve is very much not a Gaussian blur. If you run the filter three or four times, it will start to look like one, though: https://nghiaho.com/?p=1159. If you are going to do that, you might try the tent blur instead, which is like running the box blur twice, but a bit more efficient.
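vImageTentConvolve_ARGB8888() takes the same parameters as the box convolve, so it is a drop-in replacement in the code above:
error = vImageTentConvolve_ARGB8888(&convolutionSrcBuffer,
                                    &outBuffer,
                                    NULL,
                                    0,
                                    0,
                                    boxSize,   // kernel height; must be odd
                                    boxSize,   // kernel width; must be odd
                                    NULL,
                                    kvImageEdgeExtend);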
Upvotes: 0
Reputation: 3088
You could also do something like:
let noAlpha = CIVector(x: 0, y: 0, z: 0, w: 1)
image.imageByApplyingFilter("CIColorMatrix",
                            withInputParameters: ["inputAVector": noAlpha])
to get rid of the alpha. If you need to chain with other CIFilters, this would be more efficient.
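For instance, a sketch in Objective-C chaining it into a blur (assuming image is a CIImage; the filter and key names are the standard Core Image ones):
CIVector *noAlpha = [CIVector vectorWithX:0 Y:0 Z:0 W:1];
CIImage *opaque = [image imageByApplyingFilter:@"CIColorMatrix"
                           withInputParameters:@{@"inputAVector": noAlpha}];
CIImage *blurred = [opaque imageByApplyingFilter:@"CIGaussianBlur"
                             withInputParameters:@{kCIInputRadiusKey: @10.0}];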
Upvotes: 0
Reputation: 1592
It would be useful to see what looks bad about it.
If it is just the regions near the transparent areas, you might try premultiplying the image first. That will keep the meaningless colors in the fully transparent areas from bleeding into the rest of the image.
Upvotes: 0