Amit

Reputation: 283

How to convert normalized points retrieved from VNFaceLandmarkRegion2D

I am trying to detect face landmarks using VNDetectFaceLandmarksRequest, which gives me an array of VNFaceObservation objects containing normalized landmark points. These normalized points are in the coordinate system of the image captured by the camera, but I want to convert each point to the screen coordinate system.

How can I do that?

Upvotes: 0

Views: 1761

Answers (1)

rickster

Reputation: 126107

Vision doesn't know anything about the screen coordinate system because Vision doesn't display anything on the screen. It's not too hard to get there once you have pixel coordinates relative to the image, though.

To get points from normalized face space to image pixel space, use the VNImagePointForFaceLandmarkPoint function (whose docs tell you exactly where to get the values for each parameter when dealing with a VNFaceObservation).
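For example, here's a minimal sketch of that step (the imageWidth/imageHeight parameters are assumed to be the pixel dimensions of the image you passed to the VNImageRequestHandler):

    import Vision
    import simd
    import CoreGraphics

    /// Converts one landmark region's points into image pixel coordinates.
    func pixelPoints(for region: VNFaceLandmarkRegion2D,
                     in faceObservation: VNFaceObservation,
                     imageWidth: Int,
                     imageHeight: Int) -> [CGPoint] {
        return region.normalizedPoints.map { point in
            // Landmark points are normalized to the face bounding box, so pass
            // the observation's boundingBox along with the image dimensions.
            VNImagePointForFaceLandmarkPoint(
                vector_float2(Float(point.x), Float(point.y)),
                faceObservation.boundingBox,
                imageWidth,
                imageHeight)
        }
    }

VNFaceLandmarkRegion2D also offers a pointsInImage(imageSize:) convenience method that performs the same conversion in one call.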

To find the corresponding screen point for a point in the image, you'll need to do some coordinate conversions having to do with however you're presenting the image onscreen. You can find some examples of this in the sample code projects in Apple's Vision docs.
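As a rough illustration, here's a sketch that assumes the image is displayed in a UIImageView with .scaleAspectFit content mode, and that the y-axis needs flipping because Vision uses a lower-left origin while UIKit uses an upper-left one; if you're presenting the image differently (a preview layer, a custom drawing view, etc.), your conversion math will differ:

    import UIKit

    /// Maps an image pixel point into the coordinate space of a UIImageView
    /// that shows the image with .scaleAspectFit.
    func viewPoint(forImagePoint imagePoint: CGPoint,
                   imageSize: CGSize,
                   in imageView: UIImageView) -> CGPoint {
        // Scale factor used by aspect-fit rendering.
        let scale = min(imageView.bounds.width / imageSize.width,
                        imageView.bounds.height / imageSize.height)
        let renderedSize = CGSize(width: imageSize.width * scale,
                                  height: imageSize.height * scale)
        // Letterbox/pillarbox offsets that center the image in the view.
        let xOffset = (imageView.bounds.width - renderedSize.width) / 2
        let yOffset = (imageView.bounds.height - renderedSize.height) / 2

        // Flip y: Vision's pixel coordinates have a bottom-left origin.
        return CGPoint(x: imagePoint.x * scale + xOffset,
                       y: (imageSize.height - imagePoint.y) * scale + yOffset)
    }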

Upvotes: 2
