Reputation: 203
I am developing an app that does facial recognition on the Mac, using a QTCaptureSession with a QTCaptureDecompressedVideoOutput. I limit the video resolution to 640x360, request the 32ARGB pixel format, and set the minimum video frame interval to 0 to improve the frame rate, but that did not really help:
QTCaptureDecompressedVideoOutput *output = [[QTCaptureDecompressedVideoOutput alloc] init];
[output setPixelBufferAttributes:[NSDictionary dictionaryWithObjectsAndKeys:
    [NSNumber numberWithDouble:640], (id)kCVPixelBufferWidthKey,
    [NSNumber numberWithDouble:360], (id)kCVPixelBufferHeightKey,
    [NSNumber numberWithInt:kCVPixelFormatType_32ARGB], (id)kCVPixelBufferPixelFormatTypeKey,
    nil]];
output.minimumVideoFrameInterval = 0;
No matter what I do, the frame rate seems to peak at around 15-15.5 FPS. This is using the built-in camera on a very recent MacBook Pro 15" / 2.3 GHz Core i7, running 10.7.3.
Upvotes: 1
Views: 566
Reputation: 4623
The built-in iSight camera usually produces large frames. The pixel buffer format you specify does not affect the capture input, only that specific capture output. Moreover, since the frames coming off the camera are not in the pixel format you requested, a format conversion has to be performed for every frame. So my guess is that by specifying a different pixel buffer format you are only slowing the processing down.
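If you do need a specific format, one thing to try is requesting a 4:2:2 YCbCr format, which is often closer to what these cameras deliver natively than 32ARGB. This is only a sketch: whether it actually skips the conversion depends on the camera, so check the format of the buffers you receive.

    // Request kCVPixelFormatType_422YpCbCr8 ('2vuy') instead of 32ARGB.
    // Assumption: the camera's native output is closer to 2vuy, so QTKit
    // may avoid a per-frame color-space conversion. Verify with
    // CVPixelBufferGetPixelFormatType() on the delivered frames.
    [output setPixelBufferAttributes:[NSDictionary dictionaryWithObjectsAndKeys:
        [NSNumber numberWithUnsignedInt:kCVPixelFormatType_422YpCbCr8],
            (id)kCVPixelBufferPixelFormatTypeKey,
        nil]];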
minimumVideoFrameInterval
is 0 by default, so you do not need to change it if you are after the maximum frame rate. The frame rate you are getting is the maximum possible in this configuration.
Try removing the pixel buffer attributes entirely and see if the FPS changes. I would also try an external camera with a lower native resolution, which would dramatically reduce the load on the system.
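To compare the two setups, you can measure the delivered frame rate directly in the delegate callback. A minimal sketch (frameCount and lastReport are assumed instance variables of your delegate, initialized to 0):

    // QTCaptureDecompressedVideoOutput delegate callback: counts frames and
    // logs the measured FPS plus the actual pixel format about once a second.
    - (void)captureOutput:(QTCaptureOutput *)captureOutput
      didOutputVideoFrame:(CVImageBufferRef)videoFrame
         withSampleBuffer:(QTSampleBuffer *)sampleBuffer
           fromConnection:(QTCaptureConnection *)connection
    {
        frameCount++;
        NSTimeInterval now = [NSDate timeIntervalSinceReferenceDate];
        if (now - lastReport >= 1.0) {
            OSType fmt = CVPixelBufferGetPixelFormatType(videoFrame);
            NSLog(@"~%.1f FPS, pixel format '%c%c%c%c'",
                  frameCount / (now - lastReport),
                  (char)(fmt >> 24), (char)(fmt >> 16),
                  (char)(fmt >> 8), (char)fmt);
            frameCount = 0;
            lastReport = now;
        }
    }

Run it once with your pixel buffer attributes set and once without; if the no-attributes run reports a noticeably higher FPS (and a non-ARGB native format), the conversion is your bottleneck.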
Upvotes: 1