Pasindu Jay

Reputation: 4510

Swift iOS: Record Video and Audio with AVFoundation

I was able to successfully grab the recorded video by following this question here

Basically

  1. Inherit from the AVCaptureFileOutputRecordingDelegate protocol
  2. Loop through the available devices
  3. Create a session with the camera
  4. Start recording
  5. Stop recording
  6. Get the recorded video by implementing the above protocol's method

But the file doesn't come with audio.

According to this question, I have to record the audio separately and then merge the video and audio using the mentioned classes.
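
As far as I understand, that merge step would look roughly like the sketch below (using AVMutableComposition and AVAssetExportSession, in current Swift syntax; the URLs and the function name are placeholders):

import AVFoundation

// Rough sketch: merge a separately recorded audio file into a recorded video file.
func mergeAudio(from audioURL: URL, intoVideoAt videoURL: URL, outputURL: URL,
                completion: @escaping (Error?) -> Void) {
    let composition = AVMutableComposition()
    let videoAsset = AVURLAsset(url: videoURL)
    let audioAsset = AVURLAsset(url: audioURL)

    // One composition track per media type
    guard let videoTrack = videoAsset.tracks(withMediaType: .video).first,
          let audioTrack = audioAsset.tracks(withMediaType: .audio).first,
          let compVideo = composition.addMutableTrack(withMediaType: .video,
                                                      preferredTrackID: kCMPersistentTrackID_Invalid),
          let compAudio = composition.addMutableTrack(withMediaType: .audio,
                                                      preferredTrackID: kCMPersistentTrackID_Invalid)
    else { return }

    do {
        // Lay both tracks over the full duration of the video
        let range = CMTimeRange(start: .zero, duration: videoAsset.duration)
        try compVideo.insertTimeRange(range, of: videoTrack, at: .zero)
        try compAudio.insertTimeRange(range, of: audioTrack, at: .zero)
    } catch {
        completion(error)
        return
    }

    // Export the combined composition to a new movie file
    guard let exporter = AVAssetExportSession(asset: composition,
                                              presetName: AVAssetExportPresetHighestQuality) else { return }
    exporter.outputURL = outputURL
    exporter.outputFileType = .mov
    exporter.exportAsynchronously {
        completion(exporter.error)
    }
}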

But I have no idea how to implement video and audio recording at the same time.

for device in devices {
    // Make sure this particular device supports video
    if (device.hasMediaType(AVMediaTypeVideo)) {
        // Finally check the position and confirm we've got the back camera
        if(device.position == AVCaptureDevicePosition.Back) {
            captureDevice = device as? AVCaptureDevice
            if captureDevice != nil {
                print("Capture device found")

                beginSession()
            }
        }
    }
}

In this loop, the only available device positions are .Front and .Back.

Upvotes: 8

Views: 27078

Answers (5)

Northern Captain

Reputation: 1237

I followed the answer from @Mumu, but it didn't work for me because the call to AVCaptureDevice.DiscoverySession.init was returning video devices only.

Here is my version that works on iOS 14, Swift 5:

var captureSession: AVCaptureSession? = nil
var camera: AVCaptureDevice? = nil
var microphone: AVCaptureDevice? = nil
var videoOutput: AVCaptureFileOutput? = nil
var previewLayer: AVCaptureVideoPreviewLayer? = nil

func findDevices() {
    camera = nil
    microphone = nil

    //Search for video media type and we need back camera only
    let session = AVCaptureDevice.DiscoverySession.init(deviceTypes:[.builtInWideAngleCamera],
            mediaType: AVMediaType.video, position: AVCaptureDevice.Position.back)
    var devices = (session.devices.compactMap{$0})
    //Search for microphone
    let asession = AVCaptureDevice.DiscoverySession.init(deviceTypes:[.builtInMicrophone],
            mediaType: AVMediaType.audio, position: AVCaptureDevice.Position.unspecified)
    //Combine all devices into one list
    devices.append(contentsOf: asession.devices.compactMap{$0})
    for device in devices {
        if device.position == .back {
            do {
                try device.lockForConfiguration()
                device.focusMode = .continuousAutoFocus
                device.flashMode = .off
                device.whiteBalanceMode = .continuousAutoWhiteBalance
                device.unlockForConfiguration()
                camera = device
            } catch {
            }
        }
        if device.hasMediaType(.audio) {
            microphone = device
        }
    }
}

func initVideoRecorder()->Bool {
    captureSession = AVCaptureSession()
    guard let captureSession = captureSession else {return false}

    captureSession.sessionPreset = .hd4K3840x2160
    findDevices()

    guard let camera = camera else { return false}
    do {
        let cameraInput = try AVCaptureDeviceInput(device: camera)
        captureSession.addInput(cameraInput)
    } catch {
        self.camera = nil
        return false
    }

    if let audio = microphone {
        do {
            let audioInput = try AVCaptureDeviceInput(device: audio)
            captureSession.addInput(audioInput)
        } catch {
        }
    }

    videoOutput = AVCaptureMovieFileOutput()
    if captureSession.canAddOutput(videoOutput!) {
        captureSession.addOutput(videoOutput!)
        captureSession.startRunning()
        videoOutput?.connection(with: .video)?.videoOrientation = .landscapeRight
        previewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
        previewLayer?.videoGravity = .resizeAspect
        previewLayer?.connection?.videoOrientation = .landscapeRight
        return true
    }

    return false
}

func startRecording()->Bool {
    guard let captureSession = captureSession, captureSession.isRunning else {return false}
    let paths = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask)
    let fileUrl = paths[0].appendingPathComponent(getVideoName())
    try? FileManager.default.removeItem(at: fileUrl)
    videoOutput?.startRecording(to: fileUrl, recordingDelegate: self)
    return true
}
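
Stopping the recording and receiving the finished file aren't shown above; presumably they follow the standard AVCaptureFileOutputRecordingDelegate pattern, roughly like this sketch:

func stopRecording() {
    videoOutput?.stopRecording()
}

// AVCaptureFileOutputRecordingDelegate callback, fired once the movie file has been written
func fileOutput(_ output: AVCaptureFileOutput, didFinishRecordingTo outputFileURL: URL,
                from connections: [AVCaptureConnection], error: Error?) {
    if let error = error {
        print("Recording failed: \(error)")
    } else {
        print("Recorded video with audio at \(outputFileURL)")
    }
}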

Upvotes: 2

Rubaiyat Jahan Mumu

Reputation: 4127

Following is the way to record video with audio using the AVFoundation framework. The steps are:

1. Prepare the session:

self.captureSession = AVCaptureSession()

2. Prepare available video and audio devices:

let session = AVCaptureDevice.DiscoverySession.init(deviceTypes:[.builtInWideAngleCamera, .builtInMicrophone], mediaType: AVMediaType.video, position: AVCaptureDevice.Position.unspecified)
        
let cameras = (session.devices.compactMap{$0})
        
for camera in cameras {
    if camera.position == .front {
        self.frontCamera = camera
    }
    if camera.position == .back {
        self.rearCamera = camera

        try camera.lockForConfiguration()
        camera.focusMode = .continuousAutoFocus
        camera.unlockForConfiguration()
    }
}
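
Note: the audioDevice used in step 3 below isn't assigned in this snippet (the discovery session above is created with mediaType: .video, so it only returns cameras). In the complete project it presumably comes from the default audio device, along these lines:

// Hypothetical: grab the default microphone for the audioDevice used in step 3
self.audioDevice = AVCaptureDevice.default(for: .audio)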

3. Prepare session inputs:

guard let captureSession = self.captureSession else {
    throw CameraControllerError.captureSessionIsMissing
}

if let rearCamera = self.rearCamera {
    self.rearCameraInput = try AVCaptureDeviceInput(device: rearCamera)
    if captureSession.canAddInput(self.rearCameraInput!) {
        captureSession.addInput(self.rearCameraInput!)
        self.currentCameraPosition = .rear
    } else {
        throw CameraControllerError.inputsAreInvalid
    }
} else if let frontCamera = self.frontCamera {
    self.frontCameraInput = try AVCaptureDeviceInput(device: frontCamera)
    if captureSession.canAddInput(self.frontCameraInput!) {
        captureSession.addInput(self.frontCameraInput!)
        self.currentCameraPosition = .front
    } else {
        throw CameraControllerError.inputsAreInvalid
    }
} else {
    throw CameraControllerError.noCamerasAvailable
}

// Add audio input
if let audioDevice = self.audioDevice {
    self.audioInput = try AVCaptureDeviceInput(device: audioDevice)
    if captureSession.canAddInput(self.audioInput!) {
        captureSession.addInput(self.audioInput!)
    } else {
        throw CameraControllerError.inputsAreInvalid
    }
}

4. Prepare output:

self.videoOutput = AVCaptureMovieFileOutput()
if captureSession.canAddOutput(self.videoOutput!) {
    captureSession.addOutput(self.videoOutput!)
}
captureSession.startRunning()

5. Start recording:

func recordVideo(completion: @escaping (URL?, Error?) -> Void) {
    guard let captureSession = self.captureSession, captureSession.isRunning else {
        completion(nil, CameraControllerError.captureSessionIsMissing)
        return
    }
    let paths = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask)
    let fileUrl = paths[0].appendingPathComponent("output.mp4")
    try? FileManager.default.removeItem(at: fileUrl)
    videoOutput!.startRecording(to: fileUrl, recordingDelegate: self)
    self.videoRecordCompletionBlock = completion
}

6. Stop recording:

func stopRecording(completion: @escaping (Error?) -> Void) {
    guard let captureSession = self.captureSession, captureSession.isRunning else {
        completion(CameraControllerError.captureSessionIsMissing)
        return
    }
    self.videoOutput?.stopRecording()
}

7. Implement the delegate:

func fileOutput(_ output: AVCaptureFileOutput, didFinishRecordingTo outputFileURL: URL, from connections: [AVCaptureConnection], error: Error?) {
    if error == nil {
        //do something
    } else {
        //do something
    }
}
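
In the complete controller, this callback presumably hands the result back through the completion stored in step 5, something like:

func fileOutput(_ output: AVCaptureFileOutput, didFinishRecordingTo outputFileURL: URL, from connections: [AVCaptureConnection], error: Error?) {
    if error == nil {
        self.videoRecordCompletionBlock?(outputFileURL, nil) // recording finished, hand back the file URL
    } else {
        self.videoRecordCompletionBlock?(nil, error) // propagate the failure
    }
}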

I took the idea from here: https://www.appcoda.com/avfoundation-swift-guide/

Here is the complete project: https://github.com/rubaiyat6370/iOS-Tutorial/

Upvotes: 24

Esslushy

Reputation: 33

I had this problem too, but the audio worked once I grouped adding the video input together with adding the audio input right after it. This is my code for adding the inputs.

if cameraSession.canAddInput(deviceInput) && cameraSession.canAddInput(audioDeviceInput) { // check that both inputs can be added
    cameraSession.addInput(deviceInput)      // adds video
    cameraSession.addInput(audioDeviceInput) // adds audio
}

Also, I found you have to add the video input first, or else there won't be any audio. I originally had them in two if statements, but putting them in one lets video and audio be recorded together. Hope this helps.
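
For context, deviceInput and audioDeviceInput above would have been created from the camera and the microphone beforehand, roughly like this (a sketch in current AVFoundation syntax; names chosen to match the snippet, error handling omitted):

// Hypothetical setup for the two inputs used above
guard let camera = AVCaptureDevice.default(.builtInWideAngleCamera, for: .video, position: .back),
      let microphone = AVCaptureDevice.default(for: .audio) else { return }

let deviceInput = try AVCaptureDeviceInput(device: camera)          // video input
let audioDeviceInput = try AVCaptureDeviceInput(device: microphone) // audio input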

Upvotes: 1

Santhosh

Reputation: 164

Record Video With Audio

//Get Video Device

if let devices = AVCaptureDevice.devices(withMediaType: AVMediaTypeVideo) as? [AVCaptureDevice] {
    for device in devices {
        if device.hasMediaType(AVMediaTypeVideo) {
            if device.position == AVCaptureDevicePosition.back {
                videoCaptureDevice = device
            }
        }
    }
    if videoCaptureDevice != nil {
        do {
            // Add Video Input
            try self.captureSession.addInput(AVCaptureDeviceInput(device: videoCaptureDevice))
            // Get the audio device
            let audioDevice = AVCaptureDevice.defaultDevice(withMediaType: AVMediaTypeAudio)
            // Add audio input
            try self.captureSession.addInput(AVCaptureDeviceInput(device: audioDevice))
            self.previewLayer = AVCaptureVideoPreviewLayer(session: self.captureSession)
            previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill
            previewLayer.connection.videoOrientation = AVCaptureVideoOrientation.portrait
            self.videoView.layer.addSublayer(self.previewLayer)
            //Add File Output
            self.captureSession.addOutput(self.movieOutput)
            captureSession.startRunning()
        } catch {
            print(error)
        }
    }
}

For more details, refer to this link:

https://medium.com/@santhosh3386/ios-avcapturesession-record-video-with-audio-23c8f8c9a8f8

Upvotes: 0

Pasindu Jay

Reputation: 4510

Found the answer. This answer goes with this code.

It can simply be done by:

  1. Declare another capture device variable
  2. Loop through the devices and initialize the camera and audio capture device variables
  3. Add the audio input to the session

Code:

var captureDevice: AVCaptureDevice?
var captureAudio: AVCaptureDevice?

Loop through the devices and initialize the capture devices:

var captureDeviceVideoFound: Bool = false
var captureDeviceAudioFound: Bool = false

// Loop through all the capture devices on this phone
for device in devices {
    // Make sure this particular device supports video
    if (device.hasMediaType(AVMediaTypeVideo)) {
        // Finally check the position and confirm we've got the front camera
        if (device.position == AVCaptureDevicePosition.Front) {
            captureDevice = device as? AVCaptureDevice // initialize video
            if captureDevice != nil {
                print("Capture device found")
                captureDeviceVideoFound = true
            }
        }
    }
    if (device.hasMediaType(AVMediaTypeAudio)) {
        print("Capture device audio init")
        captureAudio = device as? AVCaptureDevice // initialize audio
        captureDeviceAudioFound = true
    }
}
if (captureDeviceAudioFound && captureDeviceVideoFound) {
    beginSession()
}

Inside the session setup, add both inputs:

try captureSession.addInput(AVCaptureDeviceInput(device: captureDevice))
try captureSession.addInput(AVCaptureDeviceInput(device: captureAudio))
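
The rest of beginSession() (adding the movie file output, starting the session, and kicking off the recording) isn't shown here; in the same pre-Swift 3 syntax it would presumably look roughly like this:

// Hypothetical remainder of beginSession(), in the same pre-Swift 3 syntax as above
let movieOutput = AVCaptureMovieFileOutput()
if captureSession.canAddOutput(movieOutput) {
    captureSession.addOutput(movieOutput)
}
captureSession.startRunning()

// Record to a temporary file; the delegate callback delivers the finished movie
let fileURL = NSURL(fileURLWithPath: NSTemporaryDirectory() + "output.mov")
movieOutput.startRecordingToOutputFileURL(fileURL, recordingDelegate: self)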

This will output the video file with audio. There is no need to merge the audio or do anything else.

This Apple documentation helps.

Upvotes: 3
