danielv

Reputation: 3117

AV Foundation: AVCaptureVideoPreviewLayer and frame duration

I am using AV Foundation to process frames from the video camera (iPhone 4s, iOS 6.1.2). I am setting up an AVCaptureSession, AVCaptureDeviceInput, and AVCaptureVideoDataOutput per the AV Foundation programming guide. Everything works as expected and I am able to receive frames in the captureOutput:didOutputSampleBuffer:fromConnection: delegate.

I also have a preview layer set like this:

AVCaptureVideoPreviewLayer *videoPreviewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:_captureSession];
[videoPreviewLayer setFrame:self.view.bounds];
videoPreviewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
[self.view.layer insertSublayer:videoPreviewLayer atIndex:0];

The thing is, I don't need 30 frames per second in my frame handling, and I am not able to process them that fast anyway. So I am using this code to limit the frame duration:

// videoOutput is the AVCaptureVideoDataOutput set up earlier
AVCaptureConnection *conn = [videoOutput connectionWithMediaType:AVMediaTypeVideo];
[conn setVideoMinFrameDuration:CMTimeMake(1, 10)]; // each frame >= 1/10 s, i.e. at most ~10 fps
[conn setVideoMaxFrameDuration:CMTimeMake(1, 2)];  // each frame <= 1/2 s, i.e. at least ~2 fps
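
(As an aside, a slightly more defensive version of the same code checks first whether the connection supports these settings before touching them; this is only a sketch, using the connection-level supportsVideoMinFrameDuration / supportsVideoMaxFrameDuration flags:)

if ([conn isVideoMinFrameDurationSupported]) {
    [conn setVideoMinFrameDuration:CMTimeMake(1, 10)]; // cap the output at ~10 fps
}
if ([conn isVideoMaxFrameDurationSupported]) {
    [conn setVideoMaxFrameDuration:CMTimeMake(1, 2)];  // allow frames as slow as ~2 fps
}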

This works fine and limits the frames received by the captureOutput delegate.

However, this also limits the frames per second on the preview layer and preview video becomes very unresponsive.

I understand from the documentation that the frame duration is set independently on each connection, and the preview layer does indeed have a different AVCaptureConnection. Checking the min/max frame durations on [videoPreviewLayer connection] shows that they are indeed set to the defaults (1/30 and 1/24) and differ from the durations set on the connection of the AVCaptureVideoDataOutput.
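
(For reference, that check is roughly the following; a sketch that just reads the connection-level duration properties available on iOS 6:)

AVCaptureConnection *previewConn = [videoPreviewLayer connection];
// Compare the preview connection's durations with the data output's.
NSLog(@"preview min/max frame duration: %.3f s / %.3f s",
      CMTimeGetSeconds(previewConn.videoMinFrameDuration),
      CMTimeGetSeconds(previewConn.videoMaxFrameDuration));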

So, is it possible to limit the frame duration only on the frame capturing output and still see a 1/24-1/30 frame duration on the preview video? How?

Thanks.

Upvotes: 6

Views: 4655

Answers (3)

Wildaker

Reputation: 2533

While you're correct that there are two AVCaptureConnections, that doesn't mean their minimum and maximum frame durations can be set independently, because they are sharing the same physical hardware.

If connection #1 is activating the rolling shutter at a rate of (say) five frames/sec with a frame duration of 1/5 sec, there is no way that connection #2 can simultaneously activate the shutter 30 times/sec with a frame duration of 1/30 sec.

To get the effect you want would require two cameras!

The only way to get close to what you want is to follow an approach along the lines of that outlined by Kaelin Colclasure in the answer of 22 March.

You do have options of being a little more sophisticated within that approach, however. For example, you can use a counter to decide which frames to drop, rather than making the thread sleep. You can make that counter respond to the actual frame-rate that's coming through (which you can get from the metadata that comes in to the captureOutput:didOutputSampleBuffer:fromConnection: delegate along with the image data, or which you can calculate yourself by manually timing the frames). You can even do a very reasonable imitation of a longer exposure by compositing frames rather than dropping them—just as a number of "slow shutter" apps in the App Store do (leaving aside details—such as differing rolling shutter artefacts—there's not really that much difference between one frame scanned at 1/5 sec and five frames each scanned at 1/25 sec and then glued together).
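
To make the counter idea concrete, a minimal sketch of that kind of delegate might look like this (frameCounter is assumed to be an ivar, and processSampleBuffer: is a placeholder for whatever processing you actually do):

- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    // Keep one frame in six: a 30 fps stream becomes roughly 5 fps of
    // processing, while the preview connection keeps running at full rate.
    if (++frameCounter % 6 != 0) {
        return;
    }

    // The decision could also be based on actual frame timing instead of a
    // fixed divisor, e.g. by comparing presentation timestamps:
    // CMTime pts = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);

    [self processSampleBuffer:sampleBuffer]; // placeholder for the real work
}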

Yes, it's a bit of work, but you are trying to make one video camera behave like two, in real time—and that's never going to be easy.

Upvotes: 5

gWiz

Reputation: 1284

Think of it this way: you ask the capture device to limit the frame duration so that you get better exposure. Fine. But you also want the preview to run at a higher frame rate. If the device previewed at that higher rate, the camera would NOT have enough time to expose the frames, so you would lose the better exposure on the captured frames. It is like asking to see different frames in the preview than the ones being captured.

I think that, if it were possible, it would also make for a negative user experience.

Upvotes: 3

Kaelin Colclasure

Reputation: 3975

I had the same issue for my Cocoa (Mac OS X) application. Here's how I solved it:

First, make sure to process the captured frames on a separate dispatch queue. Also make sure any frames you're not ready to process are discarded; this is the default, but I set the flag below anyway just to document that I'm depending on it.

    // Deliver sample buffers on a dedicated serial queue, off the main thread.
    videoQueue = dispatch_queue_create("com.ohmware.LabCam.videoQueue", DISPATCH_QUEUE_SERIAL);
    videoOutput = [[AVCaptureVideoDataOutput alloc] init];
    // Drop frames that arrive while the delegate is still busy (the default,
    // but set explicitly to document that we depend on it).
    [videoOutput setAlwaysDiscardsLateVideoFrames:YES];
    [videoOutput setSampleBufferDelegate:self
                                   queue:videoQueue];
    [session addOutput:videoOutput];

Then when processing the frames in the delegate, you can simply have the thread sleep for the desired time interval. Frames that the delegate is not awake to handle are quietly discarded. I implement the optional method for counting dropped frames below just as a sanity check; my application never logs dropping any frames using this technique.

- (void)captureOutput:(AVCaptureOutput *)captureOutput
  didDropSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    OSAtomicAdd64(1, &videoSampleBufferDropCount);
}

- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    // Report (and atomically reset) the count of frames dropped since the last frame we handled.
    int64_t savedSampleBufferDropCount = videoSampleBufferDropCount;
    if (savedSampleBufferDropCount && OSAtomicCompareAndSwap64(savedSampleBufferDropCount, 0, &videoSampleBufferDropCount)) {
        NSLog(@"Dropped %lld video sample buffers!!!", savedSampleBufferDropCount);
    }
    // NSLog(@"%s", __func__);
    @autoreleasepool {
        CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
        CIImage * cameraImage = [CIImage imageWithCVImageBuffer:imageBuffer];
        CIImage * faceImage = [self faceImage:cameraImage];
        dispatch_sync(dispatch_get_main_queue(), ^ {
            [_imageView setCIImage:faceImage];
        });
    }
    [NSThread sleepForTimeInterval:0.5]; // Only want ~2 frames/sec.
}

Hope this helps.

Upvotes: 1
