Reputation: 1751
We have an app that mimics the behavior of the Camu app, where filters are masked on top of each other as you swipe through them. However, unlike Camu, our app only does this for already captured images, not for a live camera preview.
When the user has selected an image, we preprocess all filters in advance (in our case 8) using lookup tables (LUTs), so that they can be swiped through quickly without each one having to be processed first. Everything works as intended. However, the app consumes an enormous amount of memory while processing the filters (at least 100 MB). This is a problem on older devices with only 512 MB of RAM, where it too often results in crashes due to memory pressure.
What we cannot understand is why this has to consume so much memory, when the GPUImage framework can easily process a live camera feed with applied filters at 30 fps without any hiccups whatsoever.
We have tried several things in order to reduce this excessive memory consumption, like processing the image at lower resolutions (which does have a positive impact, but not enough), and outputting the filtered images to a GPUImageView instead of a UIImageView.
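For reference, this is roughly how the GPUImageView variant was wired up; it is a simplified sketch, and the capturedImage variable, the filteredPreviewView outlet, and the LUT file name are placeholders rather than our actual code:

// Sketch: keep the filtered result on the GPU by rendering into a GPUImageView
// instead of reading back a UIImage.
GPUImagePicture *imageSource = [[GPUImagePicture alloc] initWithImage:capturedImage];
GPUImagePicture *lookupSource = [[GPUImagePicture alloc] initWithImage:[UIImage imageNamed:@"lut_example.png"]];
GPUImageLookupFilter *lookupFilter = [[GPUImageLookupFilter alloc] init];

[imageSource addTarget:lookupFilter];
[lookupSource addTarget:lookupFilter];
[lookupFilter addTarget:self.filteredPreviewView]; // GPUImageView acts as a filter target

[imageSource processImage];
[lookupSource processImage];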
Either we are simply using it wrong or maybe we just do not understand the inner workings of this otherwise brilliant framework.
This is the code we use for processing the filters:
- (UIImage *)image:(GPUImagePicture *)originalImageSource filteredWithLUTNamed:(NSString *)lutName
{
    // Load the lookup table image and create the LUT filter
    GPUImagePicture *lookupImageSource = [[GPUImagePicture alloc] initWithImage:[UIImage imageNamed:lutName]];
    GPUImageLookupFilter *lookupFilter = [[GPUImageLookupFilter alloc] init];

    // Cap the processing size for very large source images
    if (originalImageSource.outputImageSize.height > kMaxProcessingSize ||
        originalImageSource.outputImageSize.width > kMaxProcessingSize) {
        [lookupFilter forceProcessingAtSizeRespectingAspectRatio:_maxProcessingSize];
    }

    // Wire both inputs into the lookup filter and render one frame
    [originalImageSource addTarget:lookupFilter];
    [lookupImageSource addTarget:lookupFilter];
    [lookupFilter useNextFrameForImageCapture];
    [originalImageSource processImage];
    [lookupImageSource processImage];

    // Read the filtered result back into a UIImage
    return [lookupFilter imageFromCurrentFramebuffer];
}
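For completeness, the per-image preprocessing is driven by a loop along these lines; the LUT file names and the results array are simplified placeholders, not our exact code:

// Simplified sketch of how the 8 filtered previews are generated up front
GPUImagePicture *originalImageSource = [[GPUImagePicture alloc] initWithImage:capturedImage];
NSMutableArray *filteredImages = [NSMutableArray arrayWithCapacity:8];

for (NSString *lutName in @[@"lut_01.png", @"lut_02.png", @"lut_03.png", @"lut_04.png",
                            @"lut_05.png", @"lut_06.png", @"lut_07.png", @"lut_08.png"]) {
    UIImage *filteredImage = [self image:originalImageSource filteredWithLUTNamed:lutName];
    [filteredImages addObject:filteredImage];

    // Detach the lookup filter added by the method before the next pass
    [originalImageSource removeAllTargets];
}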
According to our profiling with Instruments, most of the memory is consumed around the processImage and imageFromCurrentFramebuffer methods.
Below is an example of the memory spike caused by the above code:
We would greatly appreciate any feedback that either helps us understand why this is the case, or points us to an actual solution that would significantly reduce the memory consumption.
Upvotes: 1
Views: 1550
Reputation: 170317
In the above code, there are two places where a significant amount of memory is being allocated temporarily: your creation of a UIImage from a file using -imageNamed:, and the subsequent creation of a GPUImagePicture based on that (which leads to an upload of an OpenGL texture for that image). This will lead to a spike in memory as those are allocated.
These should be deallocated after the method has run to completion, but the use of -imageNamed: can lead to caching, which might cause your lookup images to hang around in memory longer than you expect.

I use a framebuffer caching mechanism behind the scenes to cause OpenGL framebuffers and their attached textures to be recycled, but the creation of a series of images one after another may short-circuit this caching and cause these framebuffers to build up in the cache. You can trigger an explicit purging of this cache by calling

[[GPUImageContext sharedFramebufferCache] purgeAllUnassignedFramebuffers];

You can experiment with calling this before or after the above method is used.
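As an illustration only (the file name and the exact call site are assumptions, not something the framework mandates), the lookup image could be loaded without going through the -imageNamed: cache, and the framebuffer cache purged once the batch of filters has been generated:

// Load the LUT via -imageWithContentsOfFile: so UIKit does not cache it
NSString *lutPath = [[NSBundle mainBundle] pathForResource:@"lut_01" ofType:@"png"];
UIImage *lutImage = [UIImage imageWithContentsOfFile:lutPath];
GPUImagePicture *lookupImageSource = [[GPUImagePicture alloc] initWithImage:lutImage];

// ... run the filter chain for all LUTs ...

// Then ask GPUImage to drop any framebuffers that are no longer assigned
[[GPUImageContext sharedFramebufferCache] purgeAllUnassignedFramebuffers];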
Finally, you'll be creating an autoreleased UIImage at the -imageFromCurrentFramebuffer line near the end. If the output UIImage from this method isn't being disposed of before you get a new one from this method, these large images will accumulate in memory. You may need to wrap the above in an explicit autorelease pool that is drained as you process images to make sure that the UIImages are being deallocated each pass through this method.
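For example, such a per-pass pool could look like the sketch below; the surrounding loop and variable names are illustrative, not a prescribed implementation:

for (NSString *lutName in lutNames) {
    @autoreleasepool {
        // Autoreleased temporaries created during this pass (including the
        // extra autoreleased reference to the returned UIImage) are drained
        // at the end of each iteration instead of piling up.
        UIImage *filteredImage = [self image:originalImageSource filteredWithLUTNamed:lutName];
        [filteredImages addObject:filteredImage];
    }
}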
Upvotes: 1