KellysOnTop23

Reputation: 1435

Face detection with AVFoundation in Swift 2

So I have successfully set up a little project that uses AVFoundation to start the back or front camera with the push of a button. Now that I have control of the camera, I want to implement face detection to the point where I can map out, on the camera preview, where a face is.

The tutorials I have seen online use AVCaptureMetadataOutputObjectsDelegate and AVCaptureVideoDataOutputSampleBufferDelegate, and the main tutorial I am trying to follow uses GLKit to take control of the live stream. I can't follow it because their site's code doesn't match their GitHub, and I have too many questions and loose ends.

Can anyone help me set up face detection to the point where I can map it on the screen using AVFoundation, or point me to a good place to learn how to accomplish this?

Upvotes: 1

Views: 2668

Answers (1)

xue

Reputation: 2475

Great, you already know about the capture session stuff; now what you need to do is use AVCaptureMetadataOutputObjectsDelegate.
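For reference, here's a minimal setup sketch. It assumes `session` is your existing AVCaptureSession and that `self` (e.g. your view controller) adopts AVCaptureMetadataOutputObjectsDelegate:

    // Assumption: `session` is your already-configured AVCaptureSession.
    let metadataOutput = AVCaptureMetadataOutput()

    if session.canAddOutput(metadataOutput) {
        session.addOutput(metadataOutput)
        // Deliver callbacks on the main queue so UI can be updated directly.
        metadataOutput.setMetadataObjectsDelegate(self, queue: dispatch_get_main_queue())
        // Ask only for faces. Note: metadataObjectTypes must be set after
        // the output has been added to the session.
        metadataOutput.metadataObjectTypes = [AVMetadataObjectTypeFace]
    }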

Implement its func captureOutput(captureOutput: AVCaptureOutput!, didOutputMetadataObjects metadataObjects: [AnyObject]!, fromConnection connection: AVCaptureConnection!) method.

Your code may look like this:

    func captureOutput(captureOutput: AVCaptureOutput!, didOutputMetadataObjects metadataObjects: [AnyObject]!, fromConnection connection: AVCaptureConnection!) {

        var faces = [CGRect]()

        for metadataObject in metadataObjects as! [AVMetadataObject] {
            if metadataObject.type == AVMetadataObjectTypeFace {
                // Convert from metadata-object coordinates to preview-layer
                // coordinates so the rect can be drawn on screen directly.
                if let transformedMetadataObject = previewLayer.transformedMetadataObjectForMetadataObject(metadataObject) {
                    faces.append(transformedMetadataObject.bounds)
                }
            }
        }

        print("FACE", faces)
    }

Once you have found the faces, you can use another layer to draw rectangles around them, or do other things.
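As a rough sketch of the drawing step (assuming `faces` is the [CGRect] array from the callback above, `previewView` is the UIView hosting your preview layer, and `faceLayers` is a hypothetical [CALayer] property used to clear the previous frame's rectangles):

    // Remove the rectangles from the previous frame.
    faceLayers.forEach { $0.removeFromSuperlayer() }
    faceLayers.removeAll()

    for face in faces {
        let layer = CALayer()
        layer.frame = face  // already in preview-layer coordinates
        layer.borderColor = UIColor.yellowColor().CGColor
        layer.borderWidth = 2
        previewView.layer.addSublayer(layer)
        faceLayers.append(layer)
    }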

Here's a demo you can refer to.

Upvotes: 1
