user1139479

Reputation: 210

GPUImage Camera Takes Five Seconds

On an iPad with Retina display (the device, not the simulator), I first used Apple's AVFoundation to take still pictures, but I switched to GPUImage because I wanted the ChromaKeyBlend feature. I got that running. BUT, the issue is that when I tap my Camera button, the camera appeared immediately with AVFoundation, while with GPUImage it takes FIVE seconds!

Is that loading time to be expected? I understand it has to be synchronous and can't be in the background.

So, what are others doing to speed that up? Or are they just putting an activity indicator on the screen and making the user wait those five seconds?

Any tips would be appreciated. Thanks!

Upvotes: 3

Views: 2506

Answers (2)

user1139479

Reputation: 210

Well, I am loading an image into a GPUImagePicture, but I think I have the pipeline right, and I really like the real-time adjustment of the sensitivity (with a slider). I tried preprocessing the image in the background and shaved off some seconds, but it still takes five seconds, even if I use a completely transparent image at the same size. Hope there is some secret sauce ;)

// Set up the still camera, oriented for landscape capture
stillCamera = [[GPUImageStillCamera alloc] init];
stillCamera.outputImageOrientation = UIInterfaceOrientationLandscapeLeft;

// Load the background image for the chroma key blend (346 KB JPEG)
UIImage *inputImage = [UIImage imageNamed:@"RedCurtain-60-8x10.jpg"];
self.sourcePicture = [[GPUImagePicture alloc] initWithImage:inputImage smoothlyScaleOutput:YES];
[self.sourcePicture processImage];

// Key out green; sensitivity is adjusted at runtime with a slider
self.chromaKeyBlendFilter = [[GPUImageChromaKeyBlendFilter alloc] init];
[self.chromaKeyBlendFilter setColorToReplaceRed:0.0 green:1.0 blue:0.0];
[self.chromaKeyBlendFilter setThresholdSensitivity:0.35f];

// Pipeline: camera + background picture -> chroma key blend -> preview view
[stillCamera addTarget:self.chromaKeyBlendFilter];
[self.sourcePicture addTarget:self.chromaKeyBlendFilter];
[self.chromaKeyBlendFilter addTarget:(GPUImageView *)self.videoPreviewView];

Upvotes: 0

Brad Larson

Reputation: 170319

It sounds like you're not using a GPUImageStillCamera as your input to the GPUImage pipeline. Using a UIImage, particularly one passed into a new GPUImagePicture instance with smooth scaling set to YES, will be much, much slower than having the photo be captured and processed directly from the camera. Capturing from the camera via AV Foundation, converting that to a UIImage, then re-uploading that UIImage to the GPU through a GPUImagePicture introduces a significant amount of unnecessary overhead. Use a GPUImageStillCamera instead for best performance.

Look at the SimplePhotoFilter example for how this is done. When I test that application on my Retina iPad (3rd generation, not 4th), it takes a total of 0.9 seconds to take, filter, and return a full photo, and an additional 0.6 seconds to save that photo to the camera roll.
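The capture path described above looks roughly like this: a sketch based on GPUImageStillCamera's public API, assuming the `stillCamera` and `chromaKeyBlendFilter` objects are already wired up as in the question; error handling is minimal.

```objc
// Start the live camera feed through the filter chain.
[stillCamera startCameraCapture];

// When the shutter button is tapped, capture directly from the camera,
// processed up to the given filter -- no UIImage round trip through
// a GPUImagePicture is needed for the photo itself.
[stillCamera capturePhotoAsImageProcessedUpToFilter:chromaKeyBlendFilter
                              withCompletionHandler:^(UIImage *processedImage, NSError *error) {
    if (processedImage != nil) {
        // Save the filtered result to the camera roll.
        UIImageWriteToSavedPhotosAlbum(processedImage, nil, NULL, NULL);
    }
}];
```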

Upvotes: 4
