Vinay

Reputation: 6322

AVAssetWriter to Multiple Files

I have an AVCaptureSession that consists of an AVCaptureScreenInput and an AVCaptureDeviceInput. Both are hooked up as data output delegates, and I'm using an AVAssetWriter to write to a single MP4 file.

When writing to a single MP4 file, everything works. When I try to switch between multiple AVAssetWriters to save to successive files every 5 seconds, there is a slight audio drop when concatenating all of the files together with FFmpeg.

Example joined video (notice the small audio drop every 5 seconds):

https://youtu.be/lrqD5dcbUXg

After lots of investigation, I've concluded that this is probably because the audio and video segments are being split at different points, so successive files don't start on the same timestamp.

I've now gotten to a point where I know my algorithm should work, but I'm at a loss as to how to split an audio CMSampleBuffer. It seems like CMSampleBufferCopySampleBufferForRange might be useful, but I'm not sure how to split based on a time (I want one buffer with all samples before that time, and another with all samples after it).

func getBufferUpToTime(sample: CMSampleBuffer, to: CMTime) -> CMSampleBuffer {
  let numSamples = CMSampleBufferGetNumSamples(sample)
  var sout: CMSampleBuffer?

  let endSampleIndex = // how do I get this?

  CMSampleBufferCopySampleBufferForRange(nil, sample, CFRangeMake(0, numSamples), &sout)

  return sout!
}
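For reference, here is one way the missing index could in principle be derived, a sketch only, assuming the buffer is interleaved linear PCM with a constant sample rate taken from the buffer's own format description (the guard/fallback structure here is my own, not from the original post):

```swift
import CoreMedia

// Sketch: copy the samples of `sample` that fall before cut time `to`,
// assuming linear PCM where each sample lasts 1/sampleRate seconds.
func getBufferUpToTime(sample: CMSampleBuffer, to: CMTime) -> CMSampleBuffer? {
    let numSamples = CMSampleBufferGetNumSamples(sample)
    let pts = CMSampleBufferGetPresentationTimeStamp(sample)

    // Pull the sample rate out of the buffer's format description.
    guard let format = CMSampleBufferGetFormatDescription(sample),
          let asbd = CMAudioFormatDescriptionGetStreamBasicDescription(format)?.pointee
    else { return nil }

    // Elapsed time from the buffer's start to the cut point, in samples,
    // clamped to the valid range [0, numSamples].
    let elapsed = CMTimeSubtract(to, pts)
    let endSampleIndex = min(numSamples,
                             max(0, Int(CMTimeGetSeconds(elapsed) * asbd.mSampleRate)))

    var sout: CMSampleBuffer?
    CMSampleBufferCopySampleBufferForRange(kCFAllocatorDefault, sample,
                                           CFRangeMake(0, endSampleIndex), &sout)
    return sout
}
```

The "after" half would be the complementary range, CFRangeMake(endSampleIndex, numSamples - endSampleIndex).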

Upvotes: 3

Views: 1447

Answers (1)

Gordon Childs

Reputation: 36072

If you're using AVCaptureScreenInput, then you're not on iOS, right? So I was going to write about splitting sample buffers, but then I remembered that on OSX, AVCaptureFileOutput.startRecording (not AVAssetWriter) has this tantalizing comment:

On Mac OS X, if this method is called within the captureOutput:didOutputSampleBuffer:fromConnection: delegate method, the first samples written to the new file are guaranteed to be those contained in the sample buffer passed to that method.

Not dropping samples sounds pretty promising, so if you can live with .mov instead of .mp4 files, you should be able to get audio-dropout-free results by using AVCaptureMovieFileOutput, implementing fileOutputShouldProvideSampleAccurateRecordingStart and calling startRecording from didOutputSampleBuffer, like this:

import Cocoa
import AVFoundation

@NSApplicationMain
class AppDelegate: NSObject, NSApplicationDelegate {

    @IBOutlet weak var window: NSWindow!

    let session = AVCaptureSession()
    let movieFileOutput = AVCaptureMovieFileOutput()

    var movieChunkNumber = 0
    var chunkDuration = kCMTimeZero // TODO: synchronize access? probably fine.

    func startRecordingChunkFile() {
        let filename = String(format: "capture-%.2i.mov", movieChunkNumber)
        let url = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask).first!.appendingPathComponent(filename)
        movieFileOutput.startRecording(to: url, recordingDelegate: self)

        movieChunkNumber += 1
    }

    func applicationDidFinishLaunching(_ aNotification: Notification) {
        let displayInput = AVCaptureScreenInput(displayID: CGMainDisplayID())

        let micInput = try! AVCaptureDeviceInput(device: AVCaptureDevice.default(for: .audio)!)

        session.addInput(displayInput)
        session.addInput(micInput)

        movieFileOutput.delegate = self

        session.addOutput(movieFileOutput)

        session.startRunning()

        self.startRecordingChunkFile()
    }
}

extension AppDelegate: AVCaptureFileOutputRecordingDelegate {
    func fileOutput(_ output: AVCaptureFileOutput, didFinishRecordingTo outputFileURL: URL, from connections: [AVCaptureConnection], error: Error?) {
        // NSLog("error \(error)")
    }
}

extension AppDelegate: AVCaptureFileOutputDelegate {
    func fileOutputShouldProvideSampleAccurateRecordingStart(_ output: AVCaptureFileOutput) -> Bool {
        return true
    }

    func fileOutput(_ output: AVCaptureFileOutput, didOutputSampleBuffer sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
        // Accumulate the duration of the audio written so far, and roll over
        // to a new chunk file once roughly 5 seconds have elapsed. Calling
        // startRecording from here guarantees the new file starts with this
        // exact sample buffer (see the quoted documentation above).
        if let formatDescription = CMSampleBufferGetFormatDescription(sampleBuffer) {
            if CMFormatDescriptionGetMediaType(formatDescription) == kCMMediaType_Audio {
                let duration = CMSampleBufferGetDuration(sampleBuffer)
                chunkDuration = CMTimeAdd(chunkDuration, duration)

                if CMTimeGetSeconds(chunkDuration) >= 5 {
                    startRecordingChunkFile()
                    chunkDuration = kCMTimeZero
                }
            }
        }
    }
}

Upvotes: 2
