Reputation: 1639
I'm going to explain my current project and what I want to do.
Current project: I have an iOS app that records video and saves it to disk. I'm using Apple's AVFoundation framework to record and to show the capture preview on the device.
I want to do:
I want to keep the current functionality while adding WebRTC. The problem is that the WebRTC project is already using an AVCaptureSession, and you can't have two capture sessions in the same app.
I've asked about this, but it seems to be complicated. Someone suggested writing a subclass of cricket::VideoCapturer, but I'm not sure whether that means rewriting every class behind it in C++. I also saw that the AVCaptureSession is set up in rtc_video_capturer_ios.h, but I don't understand how I can pass my AVCaptureSession to that class from my current project.
Does anyone have an example of this? I need some guidance.
Thanks so much for your help.
Upvotes: 4
Views: 2232
Reputation: 2505
If you're using the Google WebRTC library, there is a way of doing this, but I haven't yet got a fully stable solution. I found the information here https://groups.google.com/forum/?fromgroups=&hl=sv#!topic/discuss-webrtc/8TgRy9YWvVc and was able to implement something similar in my code.
Look into RTCAVFoundationVideoSource: it exposes a captureSession property, which is the AVCaptureSession that WebRTC is using.
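As a rough sketch (assuming the Google WebRTC Objective-C framework of that era; the factory method and constraints initializer have varied between releases), getting hold of that shared session looks something like this:

let factory = RTCPeerConnectionFactory()
let constraints = RTCMediaConstraints(mandatoryConstraints: nil, optionalConstraints: nil)
// The factory creates the video source that owns WebRTC's capture session.
let sourceAVFoundation = factory.avFoundationVideoSourceWithConstraints(constraints)
// Reuse this session instead of creating a second AVCaptureSession of your own.
let captureSession: AVCaptureSession = sourceAVFoundation.captureSession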
Even if you're NOT using Google's code (I see you reference cricket?), you should be able to do something similar.
Then you can try something like this:
for output in sourceAVFoundation.captureSession.outputs {
    if let videoOutput = output as? AVCaptureVideoDataOutput {
        self.videoOutput = videoOutput
        NSLog("+++ FOUND A VIDEO OUTPUT: \(videoOutput) -> \(videoOutput.sampleBufferDelegate)")
        // Keep a reference to WebRTC's original delegate so we can keep forwarding frames to it.
        externalVideoBufferDelegate = videoOutput.sampleBufferDelegate
        // Install ourselves as the delegate so we see every sample buffer first.
        videoOutput.setSampleBufferDelegate(self, queue: videoBufferDelegateQueue)
    }
}
Find the video output and save a reference to its existing sample buffer delegate (i.e. where WebRTC receives the video buffers), then install your own.
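For reference, here's a minimal sketch of the properties (declared in the class adopting the delegate) that the snippet above assumes; these names come from the example, not from the WebRTC API:

// The delegate WebRTC originally installed on the video output.
weak var externalVideoBufferDelegate: AVCaptureVideoDataOutputSampleBufferDelegate?
// The video data output found on WebRTC's capture session.
var videoOutput: AVCaptureVideoDataOutput?
// Serial queue on which our own delegate callbacks arrive.
let videoBufferDelegateQueue = dispatch_queue_create("videoBufferDelegate", DISPATCH_QUEUE_SERIAL)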
When you implement AVCaptureVideoDataOutputSampleBufferDelegate to process (and write) the buffers, you'll need something like this:
func captureOutput(captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBuffer!, fromConnection connection: AVCaptureConnection!) {
    // Forward the buffer to WebRTC's original delegate first so it can keep
    // processing and sending the video frame.
    externalVideoBufferDelegate?.captureOutput!(captureOutput, didOutputSampleBuffer: sampleBuffer, fromConnection: connection)
    // Then do our own work with the buffer, e.g. append it to an asset writer.
    dispatch_async(videoQueue) {
        if self.assetWriterVideoInput.readyForMoreMediaData {
            self.assetWriterVideoInput.appendSampleBuffer(sampleBuffer)
        }
    }
}
Do whatever you want with the buffers, but the important part is forwarding each buffer to the externalVideoBufferDelegate you saved earlier; this lets WebRTC continue to process and send the video frame.
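In case it helps, here's a minimal sketch of the AVAssetWriter setup the delegate above assumes (outputURL and the video settings are illustrative, not part of the original answer):

let outputURL = NSURL(fileURLWithPath: NSTemporaryDirectory() + "capture.mov")
let writer = try! AVAssetWriter(URL: outputURL, fileType: AVFileTypeQuickTimeMovie)
let videoSettings: [String: AnyObject] = [
    AVVideoCodecKey: AVVideoCodecH264,
    AVVideoWidthKey: 640,
    AVVideoHeightKey: 480
]
let assetWriterVideoInput = AVAssetWriterInput(mediaType: AVMediaTypeVideo, outputSettings: videoSettings)
// Required for capture-style sources so the writer doesn't stall waiting for data.
assetWriterVideoInput.expectsMediaDataInRealTime = true
writer.addInput(assetWriterVideoInput)
let videoQueue = dispatch_queue_create("videoWriter", DISPATCH_QUEUE_SERIAL)
// Remember to call writer.startWriting() and writer.startSessionAtSourceTime(_:)
// before appending the first buffer.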
Upvotes: 2