Reputation: 1264
My aim is to use AVFoundation to capture an image and display it (using an overlay view); the displayed image should be identical to the one in the preview layer.
Working with the iPhone 4" screen size is fine, as it simply involves resizing the captured image. However, the iPhone 3.5" screen size proves more complex, requiring both a resize and a crop.
Whilst I have code that works for both camera positions (front and back), the code for resizing/cropping the image taken by the back camera has performance issues. I've narrowed the problem down to the larger image context needed when resizing: the larger context keeps the image at retina quality, whereas the front camera's resolution is low enough that its captured image isn't retina quality anyway.
The code in question is:
UIGraphicsBeginImageContext(CGSizeMake(width, height))
// where: width = 640, height = 1138
// image dimensions = 1080 x 1920
image.drawInRect(CGRectMake(0, 0, width, height))
image = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
I've searched around but cannot find a different, more efficient way of doing this. Can anyone help me overcome this performance issue? Thanks
Upvotes: 3
Views: 1047
Reputation: 299275
It's not completely clear from your question what your ultimate input and output are, and what your real-time requirements are. I'll try to touch on the major points you need with some assumptions about what you're doing.
There's a decision that isn't clear in your question, but that you must make. If you get behind in processing, do you drop frames, drop quality, or lag the input? Each of these is a valid approach, but you will eventually have to do one of them. If you don't choose, a choice will be made for you (usually to lag the input, but AVFoundation may start dropping frames for you, depending on where you're doing your processing).
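If you go the frame-dropping route, AVFoundation can handle it for you. A rough sketch (session, processingQueue, and the delegate here are assumptions about your capture setup, not from your question):
let output = AVCaptureVideoDataOutput()
// Discard late frames rather than queueing them up and lagging the input.
output.alwaysDiscardsLateVideoFrames = true
output.setSampleBufferDelegate(self, queue: processingQueue)
if session.canAddOutput(output) {
    session.addOutput(output)
}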
To the specific question about cropping, the tool you probably want is CGImageCreateWithImageInRect. That's almost certainly going to be much faster than your current solution for cropping (assuming that you can resize in time, which you suggest you can).
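For example, a rough sketch of that crop (cropRect is a placeholder; note that CGImageCreateWithImageInRect works in the CGImage's pixel coordinates, not points):
let cropRect = CGRectMake(0, 0, 1080, 1620) // placeholder crop rect, in pixels
if let croppedRef = CGImageCreateWithImageInRect(image.CGImage, cropRect) {
    // Preserve scale and orientation so the result displays like the original.
    image = UIImage(CGImage: croppedRef, scale: image.scale, orientation: image.imageOrientation)
}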
CGImageCreateWithImageInRect is a special case because it just peeks into an existing image, and so can be very fast. In general, though, you should strongly avoid creating new CGImage or UIImage objects any more than you absolutely have to. What you generally want to do is work with the underlying CIImage. For instance, you can transform a CIImage with imageByApplyingTransform to scale it and imageByCroppingToRect to crop it. CIImage avoids creating images until it absolutely has to. As the docs say, it's really a "recipe" for an image. You can chain together various filters and adjustments, and then apply them all in one big GPU transfer. Moving data over to the GPU and back to the CPU is insanely expensive; you want to do it exactly one time. And if you get behind and have to drop the frame, you can drop the CIImage without ever having to render it.
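As a rough sketch of what that chain might look like (the scale factor and crop rect are placeholders based on your dimensions; pixelBuffer and ciContext are assumed to come from your capture callback and setup):
let ciImage = CIImage(CVPixelBuffer: pixelBuffer)
// Both calls just extend the "recipe"; no pixels are processed yet.
let scale: CGFloat = 640.0 / 1080.0
let scaled = ciImage.imageByApplyingTransform(CGAffineTransformMakeScale(scale, scale))
let cropped = scaled.imageByCroppingToRect(CGRectMake(0, 0, 640, 1138))
// Rendering happens here, in a single pass.
let cgImage = ciContext.createCGImage(cropped, fromRect: cropped.extent)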
If you have access to the YUV data from the camera (if you're working with the CVPixelBuffer), then CIImage is even more powerful, because it can avoid doing an RGBA transform. CIImage also has the ability to turn off color management (which may not matter if you're just resizing and cropping, but does matter if you modify color at all). Turning off color management can be a big win. For real-time work, Core Image can also work in an EAGLContext and live on the GPU rather than having to copy back and forth to the CPU. If performance is eating you up, you want this stuff.
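A minimal sketch of that setup (create it once and reuse it; context creation is itself expensive, and the NSNull for kCIContextWorkingColorSpace is what turns color management off):
let eaglContext = EAGLContext(API: .OpenGLES2)
let ciContext = CIContext(EAGLContext: eaglContext, options: [kCIContextWorkingColorSpace: NSNull()])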
First, read over the Core Image Programming Guide to get a sense of what you can do. Then I recommend "Core Image Effects and Techniques" from WWDC 2013 and "Advances in Core Image" from WWDC 2014.
Upvotes: 5