Reputation: 11949
I am working on a picture/video filter effect project. For the effects I am using the GPUImage framework. It works fine for pictures, and now I need to apply the same effect to videos. I grab images from the video at 30 frames per second, so for a 1-minute video I filter about 1800 images. For each image I allocate GPUImagePicture and GPUImageSepiaFilter instances and release them manually, but these allocations are not being released, and after processing about 20 seconds of video the application crashes with a memory warning. Is it possible to allocate the GPUImagePicture and filter classes once and use them to filter all the images? If yes, how? That would be very helpful. Here is my code, showing what I am actually doing...
The getImage method below is fired by an NSTimer 30 times per second; it loads an image from the app's Documents directory and passes it to the - (void)imageWithEffect:(UIImage *)image method for filtering.
- (void)getImage
{
    // Work out which saved frame corresponds to the current playback time.
    float currentPlayBacktime = slider.value;
    int frames = currentPlayBacktime * 30;
    NSString *str = [NSString stringWithFormat:@"image%i.jpg", frames + 1];
    NSString *fileName = [self.imgFolderPath stringByAppendingPathComponent:str];
    if ([fileManager fileExistsAtPath:fileName])
        [self imageWithEffect:[UIImage imageWithContentsOfFile:fileName]];
}
- (void)imageWithEffect:(UIImage *)image
{
    // Allocate a fresh source and filter for this single frame.
    GPUImagePicture *gpuPicture = [[GPUImagePicture alloc] initWithImage:image];
    GPUImageSepiaFilter *filter = [[GPUImageSepiaFilter alloc] init];

    // Wire the picture into the filter, render, and read back the result.
    [gpuPicture addTarget:filter];
    [gpuPicture processImage];
    playerImgView.image = [filter imageFromCurrentlyProcessedOutputWithOrientation:UIImageOrientationUp];

    // Tear everything down again (manual reference counting).
    [gpuPicture removeAllTargets];
    [filter release];
    [gpuPicture release];
}
Upvotes: 0
Views: 395
Reputation: 170317
Why are you processing a movie as a series of still UIImages? Why are you allocating a new filter for each image? That's going to be incredibly slow.
Instead, if you need to process a movie file, use a GPUImageMovie input, and if you need access to the camera, use a GPUImageVideoCamera input. Both of these are tuned to provide fast video feeds through the filter pipeline. There are a lot of wasted processing cycles in converting still frames to and from UIImages.
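As a minimal sketch of a movie-file pipeline (assuming movieURL points at a local video file and filteredVideoView is a GPUImageView already in your view hierarchy; both names are illustrative, not from your code):

GPUImageMovie *movieFile = [[GPUImageMovie alloc] initWithURL:movieURL];
GPUImageSepiaFilter *sepiaFilter = [[GPUImageSepiaFilter alloc] init];

// Feed decoded movie frames through the sepia filter and onto an
// on-screen GPUImageView, staying on the GPU the whole way.
[movieFile addTarget:sepiaFilter];
[sepiaFilter addTarget:filteredVideoView];

[movieFile startProcessing];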
Also, only set up a filter once, and reuse it as necessary. There's significant overhead in the creation of a GPUImageFilter (setting up an FBO, etc.), so you only want to create it once and then attach inputs and outputs as they change.
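Roughly, that reuse pattern looks like this (a sketch assuming manual reference counting, as in your code; oldInput and newInput are hypothetical stand-ins for whatever sources you swap between):

// Created once, e.g. in -viewDidLoad, and kept in an ivar:
sepiaFilter = [[GPUImageSepiaFilter alloc] init];

// Later, when the input changes, detach and reattach the same filter
// instead of allocating a new one per frame:
[oldInput removeTarget:sepiaFilter];
[newInput addTarget:sepiaFilter];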
Upvotes: 1