Reputation: 5439
ARFaceGeometry has an initializer that takes a dictionary of blend shape coefficients, but how would one create this object from an array of vertices?
In Apple's Creating Face-Based AR Experiences sample, the ViewController is given an ARFaceTrackingConfiguration, so the ARSession creates an ARFaceAnchor and keeps it updated with the face tracked by the TrueDepth camera. This can be seen clearly in the scene view delegate method renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) in VirtualContentUpdater.
Because that anchor's ARFaceGeometry successfully updates to match the current state of the face, via virtualFaceNode?.update(withFaceAnchor: faceAnchor) in VirtualContentUpdater and then faceGeometry.update(from: anchor.geometry) when the mask is the chosen geometry, it must be that somewhere behind the scenes an ARFaceGeometry instance is being created or updated from higher-resolution data (the TrueDepth camera) than blend shapes provide.
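For reference, that update path boils down to roughly this (a simplified sketch rather than the sample code verbatim; the class name FaceUpdater is just a placeholder):

    import ARKit
    import SceneKit

    // Simplified sketch of the sample's update path: the session keeps the
    // ARFaceAnchor current, and the delegate pushes the anchor's geometry
    // into the ARSCNFaceGeometry attached to the face node.
    class FaceUpdater: NSObject, ARSCNViewDelegate {
        func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {
            guard let faceAnchor = anchor as? ARFaceAnchor,
                  let faceGeometry = node.geometry as? ARSCNFaceGeometry else { return }
            faceGeometry.update(from: faceAnchor.geometry)
        }
    }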
Do you know how this is happening and how I might do it myself? If not, do you know how I might find the code behind the scenes to dig through and see how it's done? Is relying on such non-public parts of the iOS frameworks even viable?
Sorry, I'm extremely new to Swift and the iOS development ecosystem, so I'm not sure where or how to find the pertinent code, or whether it's even available. Any thoughts or help are greatly appreciated, thanks so much!
Upvotes: 0
Views: 2421
Reputation: 126107
Judging by your comments on @mnuages’ answer, it sounds like your question isn’t really about manipulating ARSCNFaceGeometry; it’s about the deeper issue of sending face geometry data captured on one device over to another device and rendering it there (using SceneKit).
There are two good directions to look in for solving this issue:
You’ve assumed that transmitting blendShapes won’t give you the result you’re looking for, but have you tried it? In my experience, pulling the geometry straight from an ARFaceAnchor and creating a fresh ARFaceGeometry from that anchor’s blendShapes yield nearly the same result.
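For example, a quick way to check would be something like this sketch (assuming you already have an ARFaceAnchor in hand from a delegate callback; compareGeometries is just a placeholder name):

    import ARKit
    import simd

    // Compare the tracked mesh with one rebuilt purely from blend shape coefficients.
    func compareGeometries(for anchor: ARFaceAnchor) {
        let tracked = anchor.geometry.vertices
        guard let rebuilt = ARFaceGeometry(blendShapes: anchor.blendShapes)?.vertices else { return }

        // Largest per-vertex deviation between the two meshes, in meters.
        let maxDeviation = zip(tracked, rebuilt)
            .map { simd_distance($0.0, $0.1) }
            .max() ?? 0
        print("max vertex deviation: \(maxDeviation) m")
    }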
ARSCNFaceGeometry doesn’t have a way to be initialized from “raw” vertex data, but its superclass SCNGeometry does:
Beforehand, create SCNGeometrySource and SCNGeometryElement instances for the parts of ARFaceGeometry’s data that the documentation notes are static: the textureCoordinates and triangleIndices buffers, respectively.
When you get a new face anchor from ARKit, create an SCNGeometrySource from its vertices data. Then create an SCNGeometry using that vertex source plus the texture coordinate source and geometry element you made beforehand.
Set the new geometry on a node and you’re ready to render.
(There are probably more efficient ways to schlep this vertex data around in SceneKit, but this should be enough to get you to your first visible result, at least. A rough sketch of the idea follows; check the mentioned symbols in the docs and the rest should be clear.)
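Roughly, the approach could look like this (a sketch only; the function names makeStaticParts and makeGeometry are placeholders, and error handling is omitted):

    import ARKit
    import SceneKit
    import simd

    // Build the static pieces once, from any ARFaceGeometry: texture coordinates
    // and triangle indices don't change between updates.
    func makeStaticParts(from faceGeometry: ARFaceGeometry)
        -> (texCoords: SCNGeometrySource, element: SCNGeometryElement) {
        let texCoords = SCNGeometrySource(textureCoordinates:
            faceGeometry.textureCoordinates.map { CGPoint(x: CGFloat($0.x), y: CGFloat($0.y)) })
        let element = SCNGeometryElement(indices: faceGeometry.triangleIndices,
                                         primitiveType: .triangles)
        return (texCoords, element)
    }

    // On each update (or each packet of vertex data received from the other device),
    // rebuild only the vertex source and assemble a new geometry.
    func makeGeometry(vertices: [simd_float3],
                      texCoords: SCNGeometrySource,
                      element: SCNGeometryElement) -> SCNGeometry {
        let vertexSource = SCNGeometrySource(vertices:
            vertices.map { SCNVector3($0.x, $0.y, $0.z) })
        return SCNGeometry(sources: [vertexSource, texCoords], elements: [element])
    }

Assigning the result to a node (faceNode.geometry = makeGeometry(...)) is then all SceneKit needs in order to render the mesh.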
Upvotes: 1
Reputation: 13462
ARFaceGeometry has a vertices property, and according to the documentation:
Only the vertices buffer changes between face meshes provided by an AR session, indicating the change in vertex positions as ARKit adapts the mesh to the shape and expression of the user's face.
In this case, blend shape coefficients are not useful to you. By the time the delegate methods are called, the ARFaceGeometry has already been mutated according to the ARFaceAnchor's internal state. How that is done is completely internal to ARKit and not publicly exposed.
The vertex positions of the ARFaceGeometry are then simply used to update the vertex positions of the ARSCNFaceGeometry, which is a subclass of SCNGeometry.
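For illustration, a minimal sketch of reading those updated vertices as they arrive, using the plain ARSessionDelegate path (the class name FaceVertexReader is just a placeholder):

    import ARKit
    import simd

    // Read the refreshed vertex buffer each time ARKit updates the face anchor.
    class FaceVertexReader: NSObject, ARSessionDelegate {
        func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
            for case let faceAnchor as ARFaceAnchor in anchors {
                // Only this buffer changes between updates; textureCoordinates
                // and triangleIndices stay fixed.
                let vertices: [simd_float3] = faceAnchor.geometry.vertices
                print("got \(vertices.count) updated vertex positions")
            }
        }
    }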
Upvotes: 2