Reputation: 4029
I have a little problem with my pixellation image processing algorithm.
At startup, I load the image into an array of type unsigned char *.
Later, when needed, I modify this data and have to update the image on screen.
The update takes too long. This is how I am doing it:
CGDataProviderRef dataProvider = CGDataProviderCreateWithData(.....);
CGImageRef cgImage = CGImageCreate(....);
[imageView setImage:[UIImage imageWithCGImage:cgImage]];
Everything works, but processing a large image is very slow. I tried running this on a background thread, but that didn't help.
So basically, this takes too long. Does anyone have any idea how to improve it?
Upvotes: 4
Views: 5811
Reputation: 1453
Actually, it's as simple as this. A higher input scale value means more pixellation.
let filter = CIFilter(name: "CIPixellate")
filter?.setValue(inputImage, forKey: kCIInputImageKey)
filter?.setValue(30, forKey: kCIInputScaleKey)
let pixellatedCIImage = filter?.outputImage
The result is a CIImage; you can convert it to a UIImage using
UIImage(ciImage: pixellatedCIImage)
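Note that UIImage(ciImage:) defers rendering until the image is actually drawn, and the result is not backed by a CGImage. If you need one (for example, to save the image), you can render through a CIContext instead; a minimal sketch, reusing pixellatedCIImage from above and a hypothetical imageView:
import CoreImage
import UIKit

// Render the CIImage eagerly so the resulting UIImage is backed by a CGImage.
let context = CIContext(options: nil)
if let output = pixellatedCIImage,
   let cgImage = context.createCGImage(output, from: output.extent) {
    imageView.image = UIImage(cgImage: cgImage)  // imageView: hypothetical UIImageView
}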
Upvotes: 1
Reputation: 86
I converted @Kai Burghardt's answer to Swift 3:
func pixelateImage(_ image: UIImage, withIntensity intensity: Int) -> UIImage {
    // initialize context and image
    let context = CIContext(options: nil)
    let logo = CIImage(data: UIImagePNGRepresentation(image)!)!
    // set filter and properties
    let filter = CIFilter(name: "CIPixellate")
    filter?.setValue(logo, forKey: kCIInputImageKey)
    filter?.setValue(CIVector(x: 150, y: 150), forKey: kCIInputCenterKey)
    filter?.setValue(intensity, forKey: kCIInputScaleKey)
    let result = filter?.value(forKey: kCIOutputImageKey) as! CIImage
    let extent = result.extent
    let cgImage = context.createCGImage(result, from: extent)
    // result
    let processedImage = UIImage(cgImage: cgImage!)
    return processedImage
}
Call this code as:
self.myImageView.image = pixelateImage(UIImage(named: "test")!, withIntensity: 100)
Upvotes: 1
Reputation: 1503
How about using the Core Image filter named CIPixellate?
Here is a code snippet of how I implemented it. You can play with kCIInputScaleKey to get the intensity you want:
// initialize context and image
CIContext *context = [CIContext contextWithOptions:nil];
CIImage *logo = [CIImage imageWithData:UIImagePNGRepresentation([UIImage imageNamed:@"test"])];
// set filter and properties
CIFilter *filter = [CIFilter filterWithName:@"CIPixellate"];
[filter setValue:logo forKey:kCIInputImageKey];
[filter setValue:[[CIVector alloc] initWithX:150 Y:150] forKey:kCIInputCenterKey]; // default: 150, 150
[filter setValue:[NSNumber numberWithDouble:100.0] forKey:kCIInputScaleKey]; // default: 8.0
// render image
CIImage *result = (CIImage *) [filter valueForKey:kCIOutputImageKey];
CGRect extent = result.extent;
CGImageRef cgImage = [context createCGImage:result fromRect:extent];
// result
UIImage *image = [[UIImage alloc] initWithCGImage:cgImage];
CGImageRelease(cgImage); // createCGImage: returns a +1 reference; balance it
Here is the official Apple Filter Tutorial and a List of available Filters.
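If you want to see what is available at runtime, Core Image can also list its built-in filters and their parameters. A short sketch (Swift, for brevity):
import CoreImage

// Print every built-in Core Image filter name, then inspect
// CIPixellate's parameter keys, types, and defaults.
for name in CIFilter.filterNames(inCategory: kCICategoryBuiltIn) {
    print(name)
}
if let pixellate = CIFilter(name: "CIPixellate") {
    print(pixellate.attributes)  // includes inputScale's default of 8.0
}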
I just wrote a method to execute the rendering work in the background:
- (void)pixelateImage:(UIImage *)image withIntensity:(NSNumber *)intensity completionHandler:(void (^)(UIImage *pixelatedImage))handler {
    // async task
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        // initialize context and image
        CIContext *context = [CIContext contextWithOptions:nil];
        CIImage *logo = [CIImage imageWithData:UIImagePNGRepresentation(image)];
        // set filter and properties
        CIFilter *filter = [CIFilter filterWithName:@"CIPixellate"];
        [filter setValue:logo forKey:kCIInputImageKey];
        [filter setValue:[[CIVector alloc] initWithX:150 Y:150] forKey:kCIInputCenterKey]; // default: 150, 150
        [filter setValue:intensity forKey:kCIInputScaleKey]; // default: 8.0
        // render image
        CIImage *result = (CIImage *)[filter valueForKey:kCIOutputImageKey];
        CGRect extent = result.extent;
        CGImageRef cgImage = [context createCGImage:result fromRect:extent];
        // result (renamed so it no longer shadows the `image` parameter)
        UIImage *pixelatedImage = [[UIImage alloc] initWithCGImage:cgImage];
        CGImageRelease(cgImage); // createCGImage: returns a +1 reference; balance it
        // dispatch to main thread
        dispatch_async(dispatch_get_main_queue(), ^{
            handler(pixelatedImage);
        });
    });
}
Call it like this:
[self pixelateImage:[UIImage imageNamed:@"test"] withIntensity:[NSNumber numberWithDouble:100.0] completionHandler:^(UIImage *pixelatedImage) {
    self.logoImageView.image = pixelatedImage;
}];
Upvotes: 4
Reputation: 170317
As others have suggested, you'll want to offload this work from the CPU to the GPU in order to have any kind of decent processing performance on these mobile devices.
To that end, I've created an open source framework for iOS called GPUImage that makes it relatively simple to do this kind of accelerated image processing. It does require OpenGL ES 2.0 support, but every iOS device sold for the last couple of years has this (stats show something like 97% of all iOS devices in the field do).
As part of that framework, one of the initial filters I've bundled is a pixellation one. The SimpleVideoFilter sample application shows how to use this, with a slider that controls the pixel width in the processed image.
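The still-image path looks roughly like this; a sketch based on the framework's README, with a hypothetical inputImage (method names may differ between framework versions):
import GPUImage

// Feed a UIImage through the pixellate filter and read the result back.
let source = GPUImagePicture(image: inputImage)
let pixellate = GPUImagePixellateFilter()
pixellate.fractionalWidthOfAPixel = 0.05  // larger value = bigger blocks
source.addTarget(pixellate)
pixellate.useNextFrameForImageCapture()
source.processImage()
let pixellatedImage = pixellate.imageFromCurrentFramebuffer()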
This filter is the result of a fragment shader with the following GLSL code:
varying highp vec2 textureCoordinate;

uniform sampler2D inputImageTexture;
uniform highp float fractionalWidthOfPixel;

void main()
{
    highp vec2 sampleDivisor = vec2(fractionalWidthOfPixel);
    highp vec2 samplePos = textureCoordinate - mod(textureCoordinate, sampleDivisor);
    gl_FragColor = texture2D(inputImageTexture, samplePos);
}
In my benchmarks, GPU-based filters like this perform 6-24X faster than equivalent CPU-bound processing routines for images and video on iOS. The above-linked framework should be reasonably easy to incorporate in an application, and the source code is freely available for you to customize however you see fit.
Upvotes: 16
Reputation: 8677
The iPhone is not a great device for computationally intensive tasks like image manipulation. If you're looking to improve performance when displaying very high-resolution images, possibly while running some image-processing tasks at the same time, look into CATiledLayer. It draws its contents in tiled chunks, so you can display and process data only as needed, on individual tiles.
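A minimal Swift sketch of the pattern, with a placeholder drawing body:
import UIKit

// A view backed by CATiledLayer: draw(_:) is invoked once per visible
// tile, on background threads, so offscreen tiles are never rendered.
final class TiledImageView: UIView {
    override class var layerClass: AnyClass { CATiledLayer.self }

    override func draw(_ rect: CGRect) {
        // `rect` covers a single tile; draw (or pixellate) only that
        // region of the large image here. Placeholder fill for the sketch:
        guard let ctx = UIGraphicsGetCurrentContext() else { return }
        ctx.setFillColor(UIColor.lightGray.cgColor)
        ctx.fill(rect)
    }
}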
Upvotes: 1
Reputation: 6405
I agree with @Xorlev. The only thing I would add: if you are doing a lot of floating-point operations and building for ARMv6 with the Thumb ISA, try compiling without the -mthumb option; performance might improve, since Thumb on ARMv6 has no hardware floating-point instructions.
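In Xcode this corresponds to the "Compile for Thumb" build setting; as an .xcconfig fragment (assuming the GCC-era setting name) it would look like:
// Hypothetical .xcconfig fragment: disable Thumb so ARMv6 floating-point
// code uses the full ARM instruction set.
GCC_THUMB_SUPPORT = NO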
Upvotes: 0