Reputation: 2677
I'm building a censoring app. I've gotten far enough that I can completely pixelate an image taken with my iPhone.
But I want to achieve in the end an image like this: http://images-mediawiki-sites.thefullwiki.org/11/4/8/8/8328511755287292.jpg
So my thought was to fully pixelate my image and then add a mask on top of it to achieve the desired effect. In terms of layers it goes: originalImage + maskedPixelatedVersionOfImage. I was thinking of animating the mask when the image is touched, scaling it up to the desired size: the longer you hold your finger on the image, the bigger the mask becomes...
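The press-and-grow behavior could be sketched with Core Animation roughly as follows. This is only a sketch under assumptions: `maskLayer` is a hypothetical property holding the mask layer, and the scale factor and duration are arbitrary placeholders.

```objectivec
// Sketch: grow the mask while the finger stays down.
// 'maskLayer' is an assumed property; 4.0f and 2.0 are placeholder values.
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
    CABasicAnimation *grow = [CABasicAnimation animationWithKeyPath:@"transform.scale"];
    grow.toValue = [NSNumber numberWithFloat:4.0f]; // final mask scale
    grow.duration = 2.0;                            // time to reach full size
    grow.fillMode = kCAFillModeForwards;
    grow.removedOnCompletion = NO;
    [self.maskLayer addAnimation:grow forKey:@"grow"];
}

- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {
    // Freeze the mask at whatever size it reached by reading the
    // in-flight value from the presentation layer.
    CALayer *presentation = [self.maskLayer presentationLayer];
    self.maskLayer.transform = presentation.transform;
    [self.maskLayer removeAnimationForKey:@"grow"];
}
```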
After some searching, I gather this can be done using CALayers and CAAnimation. But how do I then composite those layers into an image that I can save to the photo album on the iPhone?
Am I taking the right approach here?
EDIT:
Okay, I guess Ole's solution is the correct one, though I'm still not getting what I want. The code I use is:
CALayer *maskLayer = [CALayer layer];
CALayer *mosaicLayer = [CALayer layer];

// The pixelated version of the photo goes into mosaicLayer.
mosaicLayer.contents = (id)[img CGImage];
mosaicLayer.frame = CGRectMake(0, 0, img.size.width, img.size.height);

// Mask image ends with 0.15 opacity on both sides. Set the background color
// of the layer to the same value so the layer can extend the mask image.
UIImage *maskImg = [UIImage imageWithContentsOfFile:[[NSBundle mainBundle] pathForResource:@"mask" ofType:@"png"]];
maskLayer.contents = (id)[maskImg CGImage];
maskLayer.frame = CGRectMake(100, 150, maskImg.size.width, maskImg.size.height);

// Mask the pixelated layer and place it over the original image.
mosaicLayer.mask = maskLayer;
[imageView.layer addSublayer:mosaicLayer];

// Render the image view's layer tree into a bitmap context.
UIGraphicsBeginImageContext(imageView.layer.bounds.size);
[imageView.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *saver = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
So in my imageView I did setImage:, which holds the original (unedited) version of the photo. On top of that I add a sublayer, mosaicLayer, which has a mask property: maskLayer. I thought that by rendering the root layer of the imageView, everything would turn out OK. Is that not correct?
Also, I figured out something else: my mask is stretched and rotated, which I'm guessing has something to do with imageOrientation? I noticed it by accidentally saving mosaicLayer to my library, which also explains the problem I had where the mask seemed to mask the wrong part of my image...
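One common way to sidestep imageOrientation mismatches (a sketch, not a guaranteed fix for this case) is to redraw the UIImage into an up-oriented bitmap before handing its CGImage to a layer, so the pixel data matches what UIKit displays:

```objectivec
// Hypothetical helper: redraw 'image' so its backing CGImage is in the
// "up" orientation, avoiding rotated/stretched layer contents.
- (UIImage *)normalizedImage:(UIImage *)image {
    if (image.imageOrientation == UIImageOrientationUp) return image;
    UIGraphicsBeginImageContextWithOptions(image.size, NO, image.scale);
    [image drawInRect:CGRectMake(0, 0, image.size.width, image.size.height)];
    UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return result;
}
```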
Upvotes: 1
Views: 8140
Reputation: 170319
If you're willing to drop support for pre-iPhone 3G S devices (iPhone and iPhone 3G), I'd suggest using OpenGL ES 2.0 shaders for this. While it may be easy to overlay a CALayer containing a pixelated version of the image, I think you'll find the performance to be lacking.
In my tests, performing a simple CPU-based calculation on every pixel of a 480 x 320 image led to a framerate of about 4 FPS on an iPhone 4. You might be able to sample only a fraction of these pixels to achieve the desired effect, but it still will be a slow operation to redraw a pixelated image to match the live video.
Instead, if you use an OpenGL ES 2.0 fragment shader to process the incoming live video image, you should be able to take in the raw camera image, apply this filter selectively over the desired area, and either display or save the resulting camera image. This processing will take place almost entirely on the GPU, which I've found to do simple operations like this at 60 FPS on the iPhone 4.
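For a sense of what such a filter looks like, here is a sketch of a pixelation fragment shader (GLSL ES), stored as an Objective-C string for compiling with glShaderSource. The uniform and varying names, and the cell-size uniform, are assumptions rather than code from any particular project:

```objectivec
// Sketch: mosaic each fragment by sampling the texture at the corner of
// its grid cell. 'pixelSize' is an assumed uniform giving the cell size
// in texture coordinates (e.g. 0.02).
static NSString *const kPixelateFragmentShader = @"\
varying highp vec2 textureCoordinate;\n\
uniform sampler2D inputTexture;\n\
uniform highp float pixelSize;\n\
void main()\n\
{\n\
    highp vec2 cell = pixelSize * floor(textureCoordinate / pixelSize);\n\
    gl_FragColor = texture2D(inputTexture, cell);\n\
}";
```

Restricting the effect to the touched area could then be done in the same shader by comparing the fragment's coordinate against a center and radius passed in as additional uniforms.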
While getting a fragment shader to work just right can require a little setup, you might be able to use this sample application I wrote for processing camera input and doing color tracking as a decent starting point. You might also look at the touch gesture I use there, where I take the initial touch-down point as the location to center an effect around, and a subsequent drag distance to control the strength or radius of that effect.
Upvotes: 4
Reputation: 135548
To render a layer tree, put all layers in a common container layer and call:
UIGraphicsBeginImageContext(containerLayer.bounds.size);
[containerLayer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
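To answer the save-to-photo-album part of the question: once you have the rendered UIImage, you can hand it to UIKit directly. A minimal sketch, where `image` is the result of the snippet above:

```objectivec
// Write the composited image to the device's saved photos album.
// Pass a target/selector instead of nil/NULL to be notified of errors.
UIImageWriteToSavedPhotosAlbum(image, nil, NULL, NULL);
```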
Upvotes: 7