I am trying to place a square over a user's face, which I am detecting with CIFaceFeature in real time over a full-screen (self.view.frame) video feed. However, the coordinates I get from CIFaceFeature.bounds use a different coordinate system than the one used by views. I have tried converting these coordinates following this and other examples, but since I am running this atop a live video feed, I don't have a displayed image whose size I can use to ease the conversion. Below is an example of my configuration; any idea how I can convert the bounds to a usable CGRect?
func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
    guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
    let sourceImage = CIImage(cvPixelBuffer: pixelBuffer)
    let features = self.faceDetector!.features(in: sourceImage, options: options)

    if !features.isEmpty {
        // Flip from Core Image coordinates (origin bottom-left)
        // to UIKit coordinates (origin top-left).
        let transformScale = CGAffineTransform(scaleX: 1, y: -1)
        let transform = transformScale.translatedBy(x: 0, y: -sourceImage.extent.height)

        for feature in features as! [CIFaceFeature] {
            faceBounds = feature.bounds
            var fb = faceBounds?.applying(transform)

            // imageViewSize is the screen frame
            let scale = min(imageViewSize.width / fb!.width,
                            imageViewSize.height / fb!.height)
            let dx = (imageViewSize.width - fb!.width * scale) / 2
            let dy = (imageViewSize.height - fb!.height * scale) / 2

            fb = fb?.applying(CGAffineTransform(scaleX: scale, y: scale))
            fb?.origin.x += dx
            fb?.origin.y += dy

            realFaceRect = fb // COMPLETELY WRONG :'(
        }
    }
}
If anyone runs into the same problem, here's an easy solution:
// The camera buffer is rotated 90° relative to the portrait UI, so the
// buffer's height maps to the screen's width.
let imgHeight = CGFloat(CVPixelBufferGetHeight(pixelBuffer))
let ratio = self.view.frame.width / imgHeight
// Converts a rect from the (landscape) buffer coordinate space to the
// portrait view's coordinate space: x/y and width/height are swapped,
// then everything is scaled to the screen.
func convertFrame(frame: CGRect, ratio: CGFloat) -> CGRect {
    let x = frame.origin.y * ratio
    let y = frame.origin.x * ratio
    let width = frame.height * ratio
    let height = frame.width * ratio
    return CGRect(x: x, y: y, width: width, height: height)
}
// Same conversion for a single point (e.g. CIFaceFeature's eye or
// mouth positions).
func convertPoint(point: CGPoint, ratio: CGFloat) -> CGPoint {
    let x = point.y * ratio
    let y = point.x * ratio
    return CGPoint(x: x, y: y)
}