Venkat

Reputation: 21

Marker detection and 3D model display for corresponding markers in iOS (using ARKit)

Previously I used Vuforia (Unity) to develop an AR app for iOS. Now I have to implement the same app using ARKit.

ARKit is awesome, except there is no marker detection.

I have tried to use Vision to detect markers, but have not been successful so far. Can I have some samples of marker detection and of displaying 3D models on the markers for iOS?

Thanks in advance.

Upvotes: 2

Views: 791

Answers (1)

PongBongoSaurus

Reputation: 7385

There are a number of ways to achieve what you are looking for, although arguably the simplest is using images as markers.

As of ARKit 1.5 you are able to use ARReferenceImages to place AR content; these are essentially the same as the markers you would use in Vuforia or EasyAR.

For your information, an ARReferenceImage is simply:

An image to be recognized in the real-world environment during a world-tracking AR session.

To make use of this function you need to pass in a:

collection of reference images to your session configuration's detectionImages property.

Which can be set like so:

var detectionImages: Set<ARReferenceImage>! { get set }

An important thing to note with ARKit 1.5 is that, unlike Vuforia, which allows extended tracking of images:

Image detection doesn't continuously track real-world movement of the image or track when the image disappears from view. Image detection works best for cases where AR content responds to static images in the scene—for example, identifying art in a museum or adding animated elements to a movie poster.
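One commonly used workaround for this limitation (an assumption on my part, not something shown in Apple's sample code) is to remove an image anchor after you have handled it, which prompts ARKit to detect the same image again the next time it appears in view. A minimal sketch:

```swift
import ARKit

// Hypothetical helper: call this after you have placed your content for a
// detected image. ARKit 1.5 fires each image detection only once per session,
// so removing the anchor allows the same image to be detected again later.
func allowRedetection(of imageAnchor: ARImageAnchor, in session: ARSession) {
    // Removing the anchor also removes any nodes attached to it, so only do
    // this if you have re-parented your content or want it to disappear.
    session.remove(anchor: imageAnchor)
}
```

Whether this fits your use case depends on whether your content should persist once placed.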

As @Alexander said, your best bet for learning how this may be appropriate to your situation is looking at the SampleCode and Documentation online, which is available here:

Recognizing Images In An AR Experience

The core points however are these:

To Enable Image Detection:

You need to first provide one or more ARReferenceImage resources. These can be added manually using the AR asset catalog in Xcode; remember that you must enter the physical size of the image in Xcode as accurately as possible, since ARKit relies on this information to determine the distance of the image from the camera.

ARReferenceImages can also be created on the fly using the following methods:

init(CGImage, orientation: CGImagePropertyOrientation, physicalWidth: CGFloat)

Which creates a new reference image from a Core Graphics image object.

init(CVPixelBuffer, orientation: CGImagePropertyOrientation, physicalWidth: CGFloat)

Which creates a new reference image from a Core Video pixel buffer.
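As a sketch of the first initializer, here is how you might build an ARReferenceImage at runtime from a bundled UIImage (the image name "marker", the reference name "dynamicMarker", and the 10 cm physical width are assumptions for illustration):

```swift
import ARKit
import UIKit

// Create an ARReferenceImage on the fly from a UIImage in the app bundle.
// Returns nil if the image cannot be loaded or has no CGImage backing.
func makeReferenceImage() -> ARReferenceImage? {
    guard let cgImage = UIImage(named: "marker")?.cgImage else { return nil }

    // physicalWidth is in metres; ARKit uses it to estimate how far
    // the real-world marker is from the camera.
    let referenceImage = ARReferenceImage(cgImage,
                                          orientation: .up,
                                          physicalWidth: 0.1)
    referenceImage.name = "dynamicMarker"
    return referenceImage
}
```

You could then add the result to the `detectionImages` set of your `ARWorldTrackingConfiguration` instead of (or alongside) images from the asset catalog.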

Having done this, you then need to create a world tracking configuration in which you pass in your ARReferenceImages before running your ARSession e.g:

guard let referenceImages = ARReferenceImage.referenceImages(inGroupNamed: "AR Resources", bundle: nil) else { return }

let configuration = ARWorldTrackingConfiguration()
configuration.detectionImages = referenceImages
session.run(configuration, options: [.resetTracking, .removeExistingAnchors])

Handling Detection Of Images:

When an ARReferenceImage is detected by your ARSession, an ARImageAnchor is created, which simply provides:

Information about the position and orientation of an image detected in a world-tracking AR session.

If image detection is successful, you will then need to use the following ARSCNViewDelegate callback to handle placement of your objects etc:

func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) { }

An example of using this to handle placement of your 3D content would be like so:

func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {

        //1. If Our Target Image Has Been Detected Then Get The Corresponding Anchor
        guard let currentImageAnchor = anchor as? ARImageAnchor else { return }

        //2. Get The Targets Name
        let name = currentImageAnchor.referenceImage.name!

        //3. Get The Targets Width & Height
        let width = currentImageAnchor.referenceImage.physicalSize.width
        let height = currentImageAnchor.referenceImage.physicalSize.height

        //4. Log The Reference Images Information
        print("""
            Image Name = \(name)
            Image Width = \(width)
            Image Height = \(height)
            """)

        //5. Create A Plane Geometry To Cover The ARImageAnchor
        let planeNode = SCNNode()
        let planeGeometry = SCNPlane(width: width, height: height)
        planeGeometry.firstMaterial?.diffuse.contents = UIColor.white
        planeNode.opacity = 0.25
        planeNode.geometry = planeGeometry

        //6. Rotate The PlaneNode To Horizontal
        planeNode.eulerAngles.x = -.pi/2

        //7. The Node Is Centered In The Anchor (0,0,0)
        node.addChildNode(planeNode)

        //8. Create An SCNBox
        let boxNode = SCNNode()
        let boxGeometry = SCNBox(width: 0.1, height: 0.1, length: 0.1, chamferRadius: 0)

        //9. Create A Different Colour For Each Face
        let faceColours = [UIColor.red, UIColor.green, UIColor.blue, UIColor.cyan, UIColor.yellow, UIColor.gray]
        var faceMaterials = [SCNMaterial]()

        //10. Apply A Material To Each Of The Six Faces
        for face in 0 ..< 6 {
            let material = SCNMaterial()
            material.diffuse.contents = faceColours[face]
            faceMaterials.append(material)
        }
        boxGeometry.materials = faceMaterials
        boxNode.geometry = boxGeometry

        //11. Position The Box So It Rests On The Plane (y = box height / 2)
        boxNode.position = SCNVector3(0 , 0.05, 0)

        //12. Add The Box To The Node
        node.addChildNode(boxNode)
    }
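To display your own 3D model on the marker rather than the coloured box, you could load it from an SCNScene file inside the same callback (the file path "art.scnassets/model.scn" is an assumption; use whatever model file is in your project):

```swift
import SceneKit

// Sketch: load a 3D model from the app bundle and attach it to the node that
// ARKit created for the detected image anchor. Call this from within
// renderer(_:didAdd:for:) after confirming the anchor is an ARImageAnchor.
func addModel(to node: SCNNode) {
    // SCNScene(named:) returns nil if the file is missing, so guard against it.
    guard let modelScene = SCNScene(named: "art.scnassets/model.scn"),
          let modelNode = modelScene.rootNode.childNodes.first else { return }

    // The anchor's node is centred on the detected image, so (0, 0, 0)
    // places the model at the middle of the marker.
    modelNode.position = SCNVector3(0, 0, 0)
    node.addChildNode(modelNode)
}
```

Depending on the model's original scale you may also need to set `modelNode.scale` so it fits the physical size of your marker.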

Hope it helps...

Upvotes: 1
