Shahriyar

Reputation: 698

How to apply a 3D model to a detected face with Apple Vision (no AR)

With the iPhone X TrueDepth camera it's possible to get the 3D coordinates of any object and use that information to position and scale it, but on older iPhones we don't have access to AR on the front-facing camera. What I've done so far is detect the face using the Apple Vision framework and draw some 2D paths around the face and its landmarks. I've made an SCNView and applied it as the front layer of my view with a clear background, and beneath it sits an AVCaptureVideoPreviewLayer.

After detecting the face, my 3D object appears on the screen, but positioning and scaling it correctly according to the face's boundingBox requires unprojection and other steps, and that's where I got stuck. I've also tried converting the 2D boundingBox to 3D using CATransform3D, but I failed. I'm wondering whether what I want to achieve is even possible. If I remember correctly, Snapchat was doing this before ARKit was available on the iPhone.


    override func viewDidLoad() {
        super.viewDidLoad()

        // Overlay the SceneKit view on top of the camera preview with a clear background.
        self.view.addSubview(self.sceneView)
        self.sceneView.frame = self.view.bounds
        self.sceneView.backgroundColor = .clear

        // Grab the face node from the loaded scene.
        self.node = self.scene.rootNode.childNode(withName: "face",
                                                  recursively: true)!
    }

    fileprivate func updateFaceView(for result: VNFaceObservation, twoDFace: Face2D) {
        let box = convert(rect: result.boundingBox)
        defer {
            DispatchQueue.main.async {
                self.faceView.setNeedsDisplay()
            }
        }

        faceView.boundingBox = box
        self.sceneView.scene?.rootNode.addChildNode(self.node)

        let unprojectedBox = SCNVector3(box.origin.x, box.origin.y, 0.8)
        let worldPoint = sceneView.unprojectPoint(unprojectedBox)
        self.node.position = worldPoint
        /* This is where I'm stuck: I need to correctly unproject
           the 2D bounding box into a 3D position (and scale). */
    }
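
(`convert(rect:)` isn't shown here; a typical implementation maps Vision's normalized, bottom-left-origin boundingBox into the view's coordinate space, roughly like this:)

    // Sketch of a typical convert(rect:) helper: Vision's boundingBox is
    // normalized to [0, 1] with a bottom-left origin, so scale it to the view
    // size and flip it vertically for UIKit's top-left origin.
    func convert(rect boundingBox: CGRect) -> CGRect {
        let size = view.bounds.size
        let origin = CGPoint(x: boundingBox.minX * size.width,
                             y: (1 - boundingBox.maxY) * size.height)
        return CGRect(origin: origin,
                      size: CGSize(width: boundingBox.width * size.width,
                                   height: boundingBox.height * size.height))
    }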

Upvotes: 2

Views: 2694

Answers (2)

Diego Meire

Reputation: 11

The only way to achieve this is to use SceneKit with an orthographic camera and use SCNGeometrySource to match the landmarks from Vision to the vertices of the mesh. First, you need a mesh with the same number of vertices as Vision returns (66–77, depending on which Vision revision you're on). You can create one using a tool like Blender.

The mesh in Blender
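
For reference, the orthographic camera setup itself is only a few lines. This is a minimal sketch (the function and node names here are placeholders, not from my project):

import SceneKit

// Sketch: an orthographic camera looking down -Z at the face mesh.
func addOrthographicCamera(to scene: SCNScene) {
    let camera = SCNCamera()
    camera.usesOrthographicProjection = true
    camera.orthographicScale = 1.0              // half the visible height, in world units

    let cameraNode = SCNNode()
    cameraNode.camera = camera
    cameraNode.position = SCNVector3(0, 0, 10)  // somewhere in front of the mesh
    scene.rootNode.addChildNode(cameraNode)
}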

Then, in code, each time you process your landmarks you follow these steps:

1- Get the mesh vertices:

func getVertices() -> [SCNVector3] {
    var result = [SCNVector3]()
    let planeSources = shape!.geometry?.sources(for: SCNGeometrySource.Semantic.vertex)
    if let planeSource = planeSources?.first {
        let stride = planeSource.dataStride
        let offset = planeSource.dataOffset
        let componentsPerVector = planeSource.componentsPerVector
        let bytesPerVector = componentsPerVector * planeSource.bytesPerComponent
        let vectors = [SCNVector3](repeating: SCNVector3Zero, count: planeSource.vectorCount)

        // Read each vertex's bytes out of the geometry source's data buffer.
        let vertices = vectors.enumerated().map({ (index: Int, element: SCNVector3) -> SCNVector3 in
            var vectorData = [Float](repeating: 0, count: componentsPerVector)
            let byteRange = NSMakeRange(index * stride + offset, bytesPerVector)

            let data = planeSource.data
            (data as NSData).getBytes(&vectorData, range: byteRange)
            return SCNVector3(x: vectorData[0], y: vectorData[1], z: vectorData[2])
        })

        result = vertices
    }

    return result
}

2- Unproject each landmark captured by Vision and keep them in a SCNVector3 array:

let unprojectedLandmark = sceneView.unprojectPoint(SCNVector3(landmarks[i].x, landmarks[i].y, 0))
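
Note that Vision's landmark points are normalized, so they need to be converted into view coordinates (and flipped vertically for UIKit) before unprojecting. A minimal sketch of that conversion, assuming `sceneView` covers the same area as the camera preview:

import SceneKit
import Vision

// Sketch: convert one VNFaceLandmarkRegion2D into unprojected scene-space vertices.
func unprojectedLandmarks(from region: VNFaceLandmarkRegion2D,
                          in sceneView: SCNView) -> [SCNVector3] {
    let viewSize = sceneView.bounds.size
    // pointsInImage(imageSize:) scales the normalized points,
    // but the origin is still the lower-left corner.
    let imagePoints = region.pointsInImage(imageSize: viewSize)
    return imagePoints.map { point in
        // Flip Y for UIKit's top-left origin, then unproject at a fixed depth.
        let screenPoint = SCNVector3(Float(point.x),
                                     Float(viewSize.height - point.y),
                                     0)
        return sceneView.unprojectPoint(screenPoint)
    }
}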

3- Modify the geometry using the new vertices:

func reshapeGeometry(_ vertices: [SCNVector3]) {
    // Build a new vertex source from the unprojected landmark positions.
    let source = SCNGeometrySource(vertices: vertices)

    var newSources = [SCNGeometrySource]()
    newSources.append(source)

    // Keep every non-vertex source (normals, texture coordinates, ...) from the original mesh.
    for source in shape!.geometry!.sources {
        if source.semantic != SCNGeometrySource.Semantic.vertex {
            newSources.append(source)
        }
    }

    // Rebuild the geometry with the original elements and material.
    let geometry = SCNGeometry(sources: newSources, elements: shape!.geometry?.elements)
    let material = shape!.geometry?.firstMaterial
    shape!.geometry = geometry
    shape!.geometry?.firstMaterial = material
}
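
Putting the steps together, the per-frame update looks roughly like this (a sketch; it assumes the landmark points have already been converted to view coordinates and that their count matches the mesh's vertex count):

// Sketch of the per-frame flow: unproject every landmark, then rebuild the mesh.
func updateMesh(with landmarkPoints: [CGPoint]) {
    let newVertices = landmarkPoints.map { point in
        sceneView.unprojectPoint(SCNVector3(Float(point.x), Float(point.y), 0))
    }
    reshapeGeometry(newVertices)
}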

This is the method I used, and it worked for me. Hope this helps!

Upvotes: 1

Jules Burt

Reputation: 125

I would suggest looking at Google's ARCore, which supports AR scenes on iOS with either the back or front-facing camera and adds some functionality beyond Apple's, particularly on devices without a face-depth (TrueDepth) camera.

Apple's Vision framework is much like Google's face-detection framework: both return 2D points representing the eyes/mouth/nose etc., plus a face-tilt component.

However, if you simply want to apply 2D textures to a responsive 3D face, or to attach 3D models to points on the face, take a look at Google's ARCore Augmented Faces framework. It has great sample code for both iOS and Android.

Upvotes: 0
