Reputation: 314
My idea is to take a point cloud (e.g. an .xyz file), create an ARReferenceObject out of it, and use that for object detection. So instead of scanning the object first and using the resulting reference object, I want to use my own point cloud to do object detection with ARKit 2.0.
The Apple documentation has something on rawFeaturePoints, which is an ARPointCloud. I saw that ARPointCloud has a points property, a vector_float3 array that is unfortunately read-only. I could not find a way to create an ARReferenceObject manually, so I tried the source code from the sample Scanning and Detecting 3D Objects.
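For what it's worth, going in the other direction is easy enough; here's a minimal sketch (assuming a running world-tracking session, with exportURL as a placeholder) that dumps the current frame's feature points to an .xyz file:

```swift
import ARKit

// Minimal sketch: dump the read-only feature points of the current frame
// to an .xyz text file. `session` is a running ARSession with a
// world-tracking configuration; `exportURL` is a placeholder file URL.
func dumpFeaturePoints(from session: ARSession, to exportURL: URL) throws {
    guard let cloud = session.currentFrame?.rawFeaturePoints else { return }
    // ARPointCloud.points is read-only; there is no public API for the
    // reverse direction (building an ARPointCloud from your own points).
    let xyz = cloud.points
        .map { "\($0.x) \($0.y) \($0.z)" }
        .joined(separator: "\n")
    try xyz.write(to: exportURL, atomically: true, encoding: .utf8)
}
```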
I scanned a 3D object and exported the generated .arobject file, which is a zip archive. After unpacking it I tinkered with the trackingData.cv3dmap but gave up; it looks like a proprietary file format, and I'm not that much into reverse engineering.
Now my question is whether there is another way to create either the .arobject file or the ARReferenceObject from my own point cloud. Or perhaps there is an altogether better way to do object detection based on an already available point cloud?
Upvotes: 4
Views: 1430
Reputation: 126167
Nope.
Per Apple engineers at WWDC18, object scanning is about much more than just the feature points. ARReferenceObject exposes a feature point array in order to provide a representation of the scan results that you can visualize and reason about, but that's just a slice of the data ARKit saves in a reference object and uses to recognize one. And as far as Apple has indicated publicly, that data and the means to generate it remain proprietary.
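You can see how little is exposed: a reference object hands you its rawFeaturePoints for visualization, but there is no initializer or setter that takes points you supply. A quick sketch (the resource group name is hypothetical):

```swift
import ARKit

// Sketch: inspect the feature points ARKit chooses to expose on reference
// objects loaded from an AR Resource Group. "ScannedObjects" is a
// hypothetical group name in the app's asset catalog.
if let objects = ARReferenceObject.referenceObjects(inGroupNamed: "ScannedObjects",
                                                    bundle: nil) {
    for object in objects {
        // rawFeaturePoints is an ARPointCloud you can render or analyze,
        // but ARReferenceObject offers no way to be constructed from one.
        print(object.name ?? "unnamed", object.rawFeaturePoints.points.count)
    }
}
```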
(Also, there's no practical difference between creating an ARReferenceObject and creating an .arobject file: the latter is essentially the serialized binary version of the former.)
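To illustrate that equivalence with the public API (the scanned object and file URL here are placeholders):

```swift
import ARKit

// Sketch: an .arobject file and an ARReferenceObject are interchangeable
// through the public export/load APIs. `scannedObject` would come from
// ARSession.createReferenceObject(transform:center:extent:completionHandler:)
// during a scanning session; `fileURL` is a placeholder.
func roundTrip(_ scannedObject: ARReferenceObject, via fileURL: URL) throws {
    // ARReferenceObject -> .arobject
    try scannedObject.export(to: fileURL, previewImage: nil)

    // .arobject -> ARReferenceObject
    let loaded = try ARReferenceObject(archiveURL: fileURL)

    // Either form ends up in the same place for detection.
    let configuration = ARWorldTrackingConfiguration()
    configuration.detectionObjects = [loaded]
}
```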
Upvotes: 2