Reputation: 21
One question about image recognition in AR SDKs: is it mandatory that the target images be part of the app itself, or can we keep a set of images in the app's storage and perform on-device image recognition with them (the images might change, or be downloaded each time a button is tapped in the app)? Note: the use case is only image recognition, not the AR feature.
Upvotes: 2
Views: 975
Reputation: 126127
You might have noticed that the class you use to load images from your app bundle and provide them to ARKit for detection is ARReferenceImage.
Scroll down the docs page for that class and you'll find, in addition to a method for loading reference images, two initializers for Creating Reference Images at runtime:

- The CGImage-based initializer is good for cases where you're loading image content from elsewhere, like fetching from the user's Photos library or downloading from a server (see the sketch below).
- The CVPixelBuffer-based initializer is good for cases where you have image content that's already in GPU memory, for example if you wanted to extract a portion of ARKit's capturedImage for use in image detection.

There's one caveat to all this, though. When you put images in your asset catalog at build time, Xcode preflights them to make sure both that each individual image is good for detection and that the images in the set are distinct enough from each other to be recognized reliably.
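Here's a minimal sketch of the dynamic-loading case the question asks about, using the CGImage-based initializer. The function name, the dictionary of downloaded images, and the 0.1 m physicalWidth are placeholders for illustration; ARKit needs you to supply the real-world printed size of each image yourself:

```swift
import ARKit
import UIKit

// Sketch: build ARKit detection images from UIImages obtained at runtime
// (downloaded, picked from Photos, etc.). Names and the 0.1 m width are
// placeholders; supply each image's real printed size in meters.
func startDetection(with downloadedImages: [String: UIImage], session: ARSession) {
    var referenceImages = Set<ARReferenceImage>()

    for (name, image) in downloadedImages {
        guard let cgImage = image.cgImage else { continue }
        let reference = ARReferenceImage(cgImage,
                                         orientation: .up,
                                         physicalWidth: 0.1) // meters (placeholder)
        reference.name = name
        referenceImages.insert(reference)
    }

    // Same detection API as for bundled images; only the source of the set differs.
    let configuration = ARWorldTrackingConfiguration()
    configuration.detectionImages = referenceImages
    session.run(configuration, options: [.resetTracking, .removeExistingAnchors])
}
```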
If you’re providing images dynamically, you don’t get that preflighting step, which creates design/interaction issues you’ll need to solve yourself.
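One partial substitute, if you can require iOS 13 or later: ARReferenceImage offers a validate(completionHandler:) method that checks a single image's suitability for detection at runtime. It doesn't replicate the whole-set distinctiveness check Xcode does, but it lets you reject obviously poor downloads before they silently fail to detect. A rough sketch:

```swift
import ARKit

// Sketch (iOS 13+): per-image runtime suitability check for a dynamically
// created reference image. Unlike Xcode's asset-catalog preflight, this can't
// warn you when two images in your set are too similar to each other.
func validateForDetection(_ reference: ARReferenceImage,
                          completion: @escaping (Bool) -> Void) {
    reference.validate { error in
        if let error = error {
            // e.g. low contrast or repetitive texture; surface this to the user
            print("\(reference.name ?? "unnamed image") is unsuitable: \(error.localizedDescription)")
            completion(false)
        } else {
            completion(true)
        }
    }
}
```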
Upvotes: 2