Reputation: 347
I want to build a demo app in ARKit and I have some questions about what is currently possible with the beta (Apple has been calling this RealityKit, or ARKit 3.0).
The demo app I'm trying to build should do the following:
1. identify an object or image in the real environment, and create an anchor there
2. render a virtual model attached to the anchor
3. have the virtual model presented with occlusion
4. have the virtual model move along with the anchor image / object
I've tried adapting some code from earlier versions (ARKit 2.0, which leverages SceneKit), but certain features, like people occlusion, are not available in ARKit 2.0.
As Apple has been iterating on their beta, a lot of features advertised on their site and at WWDC 2019 have seemingly disappeared from the documentation for RealityKit (people occlusion, body tracking, world tracking).
The way I understand it, items (1) and (2) are possible with ARKit 2.0. Item (3) is advertised as possible with the beta, but I see little to no documentation.
Is this possible to do in the latest beta? If so, what is the best approach? If not, are there any workarounds like mixing the old and new APIs or something?
Upvotes: 5
Views: 2878
Reputation: 58093
All the challenges you mentioned are achievable in both ARKit/SceneKit and ARKit/RealityKit.
- Identify an object or image in the real environment, and create an anchor there.
You're able to identify 3D objects or images using the following configurations in ARKit:
let configuration = ARWorldTrackingConfiguration()

guard let obj = ARReferenceObject.referenceObjects(inGroupNamed: "Resources",
                                                   bundle: nil)
else { return }

configuration.detectionObjects = obj       // lets ARKit create an ARObjectAnchor
sceneView.session.run(configuration)
Or, for image detection:
let config = ARWorldTrackingConfiguration()

guard let img = ARReferenceImage.referenceImages(inGroupNamed: "Resources",
                                                 bundle: nil)
else { return }

config.detectionImages = img               // lets ARKit create an ARImageAnchor
config.maximumNumberOfTrackedImages = 3
sceneView.session.run(config)
However, if you want to implement similar behaviour in RealityKit, use this:
let objectAnchor = AnchorEntity(.object(group: "Resources", name: "object"))
or
let imageAnchor = AnchorEntity(.image(group: "Resources", name: "model"))
- Render a virtual model attached to the anchor.
At the moment ARKit has four companions helping you render 3D and 2D graphics: RealityKit, SceneKit, SpriteKit, and Metal.
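Here's a minimal RealityKit sketch of item (2), assuming an ARView outlet named arView and the same "Resources" AR resource group as above; a generated box stands in for your own USDZ model:
import UIKit
import RealityKit

// Anchor that follows the detected reference image
let imageAnchor = AnchorEntity(.image(group: "Resources", name: "model"))

// A simple generated box as placeholder content
let box = ModelEntity(mesh: .generateBox(size: 0.1),
                      materials: [SimpleMaterial(color: .cyan, isMetallic: false)])

imageAnchor.addChild(box)                   // the model is now attached to the anchor
arView.scene.anchors.append(imageAnchor)    // add the anchor to the scene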
- Have the virtual model presented with occlusion.
In the RealityKit module, all materials are structures that conform to the Material protocol. At the moment there are six types of materials, and for occlusion you need OcclusionMaterial.
Look at THIS POST to find out how to assign materials programmatically in RealityKit.
And THIS POST shows you how to assign custom occlusion material in SceneKit.
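Here's a hedged sketch of both occlusion flavours in RealityKit, again assuming an ARView named arView; the box size and the "Resources" group name are placeholders:
import ARKit
import RealityKit

// An invisible occluder: virtual content behind this box gets hidden
let occluder = ModelEntity(mesh: .generateBox(size: 0.3),
                           materials: [OcclusionMaterial()])

let objectAnchor = AnchorEntity(.object(group: "Resources", name: "object"))
objectAnchor.addChild(occluder)
arView.scene.anchors.append(objectAnchor)

// People occlusion is enabled on the session configuration, not on a material
let config = ARWorldTrackingConfiguration()
if ARWorldTrackingConfiguration.supportsFrameSemantics(.personSegmentationWithDepth) {
    config.frameSemantics.insert(.personSegmentationWithDepth)
}
arView.session.run(config)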
- Have a virtual model move along with an image/object anchor.
To implement this type of behavior in ARKit + SceneKit you have to use the renderer(_:didAdd:for:) or session(_:didAdd:) delegate methods. In RealityKit, AnchorEntities are tracked automatically.
Here's an example of using ARObjectAnchor in the renderer(_:didAdd:for:) instance method:
extension ViewController: ARSCNViewDelegate {

    func renderer(_ renderer: SCNSceneRenderer,
                  didAdd node: SCNNode,
                  for anchor: ARAnchor) {

        // Called when ARKit adds a node for a newly detected anchor
        if anchor is ARObjectAnchor {
            let text = SCNText(string: "ARKit", extrusionDepth: 0.5)
            let textNode = SCNNode(geometry: text)
            node.addChildNode(textNode)    // child nodes follow the anchor's node as it moves
        }
    }
}
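And a minimal sketch of the session(_:didAdd:) alternative mentioned above, assuming your ViewController has been assigned as the session's delegate:
extension ViewController: ARSessionDelegate {

    func session(_ session: ARSession, didAdd anchors: [ARAnchor]) {
        for anchor in anchors where anchor is ARObjectAnchor {
            // React to a newly detected object anchor here,
            // e.g. create and place your own content for it
            print("Detected object anchor:", anchor.identifier)
        }
    }
}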
Upvotes: 6