user3714284

Reputation: 3

GPUImage performance

I'm using GPUImage to apply filters, and chains of filters, to images. I'm using a UISlider to change a filter's value and reapplying the filter continuously as the slider's value changes, so the user can see the output as they adjust it.

This makes processing very slow, and sometimes the UI hangs or the app even crashes after receiving a low-memory warning.

How can I achieve fast filtering with GPUImage? I have seen apps that apply filters on the fly, and their UI doesn't hang for even a second.

Thanks,

Here's the sample code that runs when the slider's value changes:

- (IBAction)foregroundSliderValueChanged:(id)sender
{
    UISlider *slider = (UISlider *)sender;
    float value = (slider.maximumValue - slider.value) + slider.minimumValue;

    [(GPUImageVignetteFilter *)self.filter setVignetteEnd:value];

    GPUImagePicture *filteredImage = [[GPUImagePicture alloc] initWithImage:_image];
    [filteredImage addTarget:self.filter];
    [filteredImage processImage];
    self.imageView.image = [self.filter imageFromCurrentlyProcessedOutputWithOrientation:_image.imageOrientation];
}

Upvotes: 0

Views: 1472

Answers (1)

Brad Larson

Reputation: 170319

You haven't specified how you set up your filter chain, what filters you use, or how you're doing your updates, so it's hard to provide anything but the most generic advice. Still, here goes:

  • If processing an image for display to the screen, never use a UIImageView. Converting to and from a UIImage is an extremely slow process, and one that should never be used for live updates of anything. Instead, go GPUImagePicture -> filters -> GPUImageView. This keeps the image on the GPU and is far more efficient, processing- and memory-wise.
  • Only process as many pixels as you actually will be displaying. Use -forceProcessingAtSize: or -forceProcessingAtSizeRespectingAspectRatio: on the first filter in your chain to reduce its resolution to the output resolution of your GPUImageView. This will cause your filters to operate on image frames that are usually many times smaller than your full-resolution source image. There's no reason to process pixels you'll never see. You can then pass in a 0 size to these same methods when you need to finally capture the full-resolution image to disk.
  • Find more efficient ways of setting up your filter chain. If you have a common set of simple operations that you apply over and over to your images, think about creating a custom shader that combines these operations, as appropriate. Expensive operations also sometimes have a cheaper substitute, like how I use a downsampling-then-upsampling pass for GPUImageiOSBlur to use a much smaller blur radius than I would with a stock GPUImageGaussianBlur.
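Putting the first two points together, a sketch of what the reworked code might look like, assuming a `GPUImageView` outlet named `filterView` in place of the `UIImageView` (the property names and `sizeInPixels` usage here are illustrative, not taken from the question):

    // Build the chain once, e.g. in -viewDidLoad, instead of on every slider change:
    - (void)viewDidLoad
    {
        [super viewDidLoad];

        self.sourcePicture = [[GPUImagePicture alloc] initWithImage:_image];
        self.filter = [[GPUImageVignetteFilter alloc] init];

        // Only process as many pixels as the view will actually display
        [self.filter forceProcessingAtSizeRespectingAspectRatio:self.filterView.sizeInPixels];

        [self.sourcePicture addTarget:self.filter];
        [self.filter addTarget:self.filterView]; // stays on the GPU, no UIImage round-trip

        [self.sourcePicture processImage];
    }

    - (IBAction)foregroundSliderValueChanged:(UISlider *)sender
    {
        float value = (sender.maximumValue - sender.value) + sender.minimumValue;
        [(GPUImageVignetteFilter *)self.filter setVignetteEnd:value];
        [self.sourcePicture processImage]; // re-renders straight into the GPUImageView
    }

When you finally need the full-resolution result (to save to disk, say), you would reset the forced size to zero before capturing, as described in the second bullet.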

Upvotes: 5
