There are two different requests that you can use for face detection tasks with the iOS Vision framework: VNDetectFaceLandmarksRequest and VNDetectFaceRectanglesRequest. Both of them return an array of VNFaceObservation, one for each detected face. VNFaceObservation has a variety of optional properties, including boundingBox and landmarks. The landmarks object in turn includes optional properties such as nose, innerLips, leftEye, etc.
Do the two Vision requests differ in how they perform face detection? It seems that VNDetectFaceRectanglesRequest only finds a bounding box (and perhaps a few other properties) but no landmarks, while VNDetectFaceLandmarksRequest seems to find both a bounding box and landmarks.

Are there cases where one request type will find a face and the other will not? Is VNDetectFaceLandmarksRequest superior to VNDetectFaceRectanglesRequest, or does the latter have advantages in performance or reliability?
Here is example code showing how these two Vision requests can be used:
import Vision

let faceLandmarkRequest = VNDetectFaceLandmarksRequest()
let faceRectangleRequest = VNDetectFaceRectanglesRequest()
let requestHandler = VNImageRequestHandler(ciImage: image, options: [:])
try requestHandler.perform([faceRectangleRequest, faceLandmarkRequest])

if let rectangleResults = faceRectangleRequest.results as? [VNFaceObservation] {
    let boundingBox1 = rectangleResults.first?.boundingBox // optional: nil if no face was detected
}

if let landmarkResults = faceLandmarkRequest.results as? [VNFaceObservation] {
    let boundingBox2 = landmarkResults.first?.boundingBox // optional: nil if no face was detected
    let landmarks = landmarkResults.first?.landmarks // optional: nil if no landmarks were found
}
VNDetectFaceRectanglesRequest is a more lightweight operation for finding face rectangles.

VNDetectFaceLandmarksRequest is a heavier operation, which can also locate landmarks on the face.
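As a sketch of what this means in practice: if you only need bounding boxes, you can run just the lighter request. The helper below is illustrative (the function name and the assumption of an available CGImage are mine, not from the original post):

```swift
import Vision

// Illustrative helper: detect only face bounding boxes using the
// lightweight VNDetectFaceRectanglesRequest. No landmark detection
// runs here, so this should cost less than VNDetectFaceLandmarksRequest.
func detectFaceRectangles(in image: CGImage) throws -> [CGRect] {
    let request = VNDetectFaceRectanglesRequest()
    let handler = VNImageRequestHandler(cgImage: image, options: [:])
    try handler.perform([request])
    // boundingBox values are in normalized coordinates
    // (0...1, origin at the bottom-left of the image).
    return (request.results as? [VNFaceObservation])?
        .map { $0.boundingBox } ?? []
}
```

Reserve VNDetectFaceLandmarksRequest for the cases where you actually need nose, innerLips, leftEye, and the other landmark properties.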