Xys

Reputation: 10899

Why do we need to pause the thread after appendPixelBuffer:withPresentationTime:?

When encoding a video on iOS, most solutions include this step:

while(encoding) {
    if(assetWriterInput.readyForMoreMediaData) {
        [adaptor appendPixelBuffer:buffer withPresentationTime:presentTime];
        if(buffer)
            CVBufferRelease(buffer);
        [NSThread sleepForTimeInterval:0.05]; // <=== This line slows down encoding
    }
} 

If I don't sleep the thread, the resulting video looks jerky, even though readyForMoreMediaData always returns YES. If I pause the thread, the result looks perfect.

But I don't understand the purpose of readyForMoreMediaData if we need to pause the thread anyway. I can reduce the sleep interval to 0.03 without the result looking jerky, but it still slows down the encoding process considerably.
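For reference, the pull model Apple documents for this is requestMediaDataWhenReadyOnQueue:usingBlock:, where the writer input calls back whenever it can accept more data, so no polling or sleeping is needed. A rough sketch of what I understand that to look like (nextPixelBuffer() is a hypothetical stand-in for my frame source, and computing presentTime per frame is assumed):

dispatch_queue_t encodingQueue = dispatch_queue_create("encoding", DISPATCH_QUEUE_SERIAL);

[assetWriterInput requestMediaDataWhenReadyOnQueue:encodingQueue usingBlock:^{
    // The input pulls frames as fast as it can consume them.
    while (assetWriterInput.readyForMoreMediaData) {
        CVPixelBufferRef buffer = nextPixelBuffer(); // hypothetical frame source
        if (!buffer) {
            [assetWriterInput markAsFinished]; // no more frames to encode
            break;
        }
        [adaptor appendPixelBuffer:buffer withPresentationTime:presentTime];
        CVBufferRelease(buffer);
    }
}];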

Any help would be appreciated, thanks!

Upvotes: 0

Views: 380

Answers (1)

Sten

Reputation: 3864

I have used AVAssetWriter to write real-time video in several apps over several years, including one that records at 240 fps as standard, and I have had no problems with jerky video. I have never used any sleep calls or CVBufferRelease. My code essentially looks like this:

-(void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
    if (videoWriterInput.readyForMoreMediaData) {
        [videoWriterInput appendSampleBuffer:sampleBuffer];
    }
}

Maybe you should check how you set up your asset writer? AVCaptureVideoDataOutput has a recommendedVideoSettingsForAssetWriterWithOutputFileType: method that can help you optimize the writer input's settings. I would also try without the adaptor if you don't absolutely need it; in my experience it runs smoother without.
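A minimal setup sketch, assuming you already have a capture session with a video data output and an AVAssetWriter (videoDataOutput and videoWriter are placeholder names):

NSDictionary *settings = [videoDataOutput recommendedVideoSettingsForAssetWriterWithOutputFileType:AVFileTypeQuickTimeMovie];
AVAssetWriterInput *videoWriterInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo outputSettings:settings];
// Tell the writer input that data arrives in real time, so
// readyForMoreMediaData behaves correctly for a capture pipeline.
videoWriterInput.expectsMediaDataInRealTime = YES;
[videoWriter addInput:videoWriterInput];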

Upvotes: 0
