e.iluf

Reputation: 1659

use of unresolved identifier

The input has been defined as try AVCaptureDeviceInput(device: captureDevice), but the compiler still says input is an unresolved identifier. Please see my code below; I have tried multiple approaches but without success.

import UIKit
import AVFoundation

class ViewController: UIViewController, AVCaptureMetadataOutputObjectsDelegate {

    var captureSession: AVCaptureSession?
    var videoPreviewLayer: AVCaptureVideoPreviewLayer?
    var qrCodeFrameView: UIView?

    override func viewDidLoad() {
        super.viewDidLoad()
        // Get an instance of AVCaptureDevice class to initialize a device object and provide the video as the media type parameter.

        let captureDevice = AVCaptureDevice.defaultDevice(withMediaType: AVMediaTypeVideo)

        // Get an instance of the AVCaptureDeviceInput class using the previous device object.
        do {
            let captureDevice = AVCaptureDevice.defaultDevice(withMediaType: AVMediaTypeVideo)
            let input = try AVCaptureDeviceInput(device: captureDevice)
            // Do the rest of your work...
        } catch let error as NSError {
            // Handle any errors
            print(error)
        }

        // Initialize the captureSession object
        captureSession = AVCaptureSession()
        captureSession?.addInput(input as! AVCaptureInput)

        // Set the input device on the capture session.


        // Initialize a AVCaptureMetadataOutput object and set it as the output device to the capture session. 
        let captureMetadataOuput = AVCaptureMetadataOutput()
        captureSession?.addOutput(captureMetadataOuput)

        // Set delegate and use the default dispatch queue to execute the call back 
        captureMetadataOuput.setMetadataObjectsDelegate(self, queue: DispatchQueue.main)
        captureMetadataOuput.metadataObjectTypes = [AVMetadataObjectTypeQRCode]

        // Initialize the video preview layer and add it as a sublayer to the viewPreview view's layer.
        videoPreviewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
        videoPreviewLayer?.videoGravity = AVLayerVideoGravityResizeAspectFill
        videoPreviewLayer?.frame = view.layer.bounds
        view.layer.addSublayer(videoPreviewLayer!)

        // Start video capture
        captureSession?.startRunning()



    }

    override func didReceiveMemoryWarning() {
        super.didReceiveMemoryWarning()
        // Dispose of any resources that can be recreated.
    }
}

How do I fix this?

Upvotes: 2

Views: 4212

Answers (3)

Martin R

Reputation: 539685

As already explained in the other answers, your input variable is limited to the scope of the do block.

An alternative solution – if you want to keep the do/catch blocks smaller and localized – is to declare the variable outside of the block:

    let captureDevice = AVCaptureDevice.defaultDevice(withMediaType: AVMediaTypeVideo)
    let input: AVCaptureDeviceInput
    do {
        input = try AVCaptureDeviceInput(device: captureDevice)
    } catch let error as NSError {
        print(error)
        return // Must return from method here ...
    }

    // `input` is defined and initialized now ...
    captureSession = AVCaptureSession()
    captureSession?.addInput(input)
    // ...

Note that this requires that you return from the method immediately in the error case, because input would be undefined otherwise.

Or, if the error message is not important, use try? in a guard statement:

    let captureDevice = AVCaptureDevice.defaultDevice(withMediaType: AVMediaTypeVideo)
    guard let input = try? AVCaptureDeviceInput(device: captureDevice) else {
        return
    }

    // `input` is defined and initialized now ...
    captureSession = AVCaptureSession()
    captureSession?.addInput(input)
    // ...

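A variation on the guard approach, sketched under the assumption that the Swift 3 era defaultDevice(withMediaType:) call used in the question returns an optional device (it yields nil when no camera is available, for example in the Simulator): both the device lookup and the throwing initializer can then be handled in a single guard:

    guard
        let captureDevice = AVCaptureDevice.defaultDevice(withMediaType: AVMediaTypeVideo),
        let input = try? AVCaptureDeviceInput(device: captureDevice)
    else {
        return // No camera available, or the input could not be created
    }

    // `input` is defined and initialized now ...
    captureSession = AVCaptureSession()
    captureSession?.addInput(input)
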
Upvotes: 1

vadian

Reputation: 285059

input is scoped to the do block; it is not visible outside of it.

Basically, it's a very bad idea to just print the error and continue as if nothing happened. Always put all of the code that depends on a successful result inside the do block:

do {
  let captureDevice = AVCaptureDevice.defaultDevice(withMediaType: AVMediaTypeVideo)
  let input = try AVCaptureDeviceInput(device: captureDevice)
  // Initialize the captureSession object
  captureSession = AVCaptureSession()
  captureSession?.addInput(input) // input is already an AVCaptureInput, no forced cast needed

  // Initialize a AVCaptureMetadataOutput object and set it as the output 

 ...

  // Start video capture
  captureSession?.startRunning()

  // Do the rest of your work...
} catch let error as NSError {
  // Handle any errors
  print(error)
}
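
To see the scope rule in isolation, here is a minimal standalone sketch; makeValue() and use(_:) are hypothetical placeholders, not part of the question's code:

do {
  let value = try makeValue() // `value` exists only inside this do block
  use(value)                  // OK
} catch {
  print(error)
}
// use(value)                 // error: use of unresolved identifier 'value'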

Upvotes: 0

rmaddy

Reputation: 318774

It's a scope issue. Your captureDevice and input constants are only usable inside the do block. Update your code to something like this:

override func viewDidLoad() {
    super.viewDidLoad()
    // Get an instance of AVCaptureDevice class to initialize a device object and provide the video as the media type parameter.

    let captureDevice = AVCaptureDevice.defaultDevice(withMediaType: AVMediaTypeVideo)

    // Get an instance of the AVCaptureDeviceInput class using the previous device object.
    do {
        let input = try AVCaptureDeviceInput(device: captureDevice)

        // Initialize the captureSession object
        captureSession = AVCaptureSession()
        // input is already an AVCaptureInput, so no forced cast is needed
        captureSession?.addInput(input)

        // Set the input device on the capture session.


        // Initialize a AVCaptureMetadataOutput object and set it as the output device to the capture session. 
        let captureMetadataOuput = AVCaptureMetadataOutput()
        captureSession?.addOutput(captureMetadataOuput)

        // Set delegate and use the default dispatch queue to execute the call back 
        captureMetadataOuput.setMetadataObjectsDelegate(self, queue: DispatchQueue.main)
        captureMetadataOuput.metadataObjectTypes = [AVMetadataObjectTypeQRCode]

        // Initialize the video preview layer and add it as a sublayer to the viewPreview view's layer.
        videoPreviewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
        videoPreviewLayer?.videoGravity = AVLayerVideoGravityResizeAspectFill
        videoPreviewLayer?.frame = view.layer.bounds
        view.layer.addSublayer(videoPreviewLayer!)

        // Start video capture
        captureSession?.startRunning()
    } catch let error as NSError {
        // Handle any errors
        print(error)
    }
}
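
As a side note on the // Handle any errors comment: printing alone gives the user no feedback. One purely illustrative option is a small helper on the view controller, called from the catch clause instead of (or in addition to) print(error); note that presenting from viewDidLoad can be too early, so viewDidAppear is a safer trigger:

    // A hypothetical helper, not part of the original answer
    func showCameraError(_ error: Error) {
        let alert = UIAlertController(title: "Camera Error",
                                      message: error.localizedDescription,
                                      preferredStyle: .alert)
        alert.addAction(UIAlertAction(title: "OK", style: .default, handler: nil))
        present(alert, animated: true, completion: nil)
    }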

Upvotes: 0
