Reputation: 2077
I am using the AVMetadata API to extract the bounds of an AVMetadataFaceObject. When printed to the console, this CGRect has the following values: bounds={0.2,0.3 0.4x0.5}.
I'm having a fair amount of trouble mapping this to a UIView that displays over the face. I can hard-code some conversion values for my specific screen to get it crudely into the right spot, but I would like a solution that displays a UIView over the face shown in my previewView on any screen size.
Does anyone know how to map these to the frame of an on-screen UIView based upon the size of a previewView?
Upvotes: 0
Views: 252
Reputation: 1605
You should be able to take the size of the capture area, call it captureSize, and then do this:
// The bounds of an AVMetadataFaceObject are normalized to [0, 1],
// so scale each component by the preview's size in points.
CGRect viewRect;
viewRect.origin.x = bounds.origin.x * captureSize.width;
viewRect.origin.y = bounds.origin.y * captureSize.height;
viewRect.size.width = bounds.size.width * captureSize.width;
viewRect.size.height = bounds.size.height * captureSize.height;
Now, this all depends on how your previewView is set up and whether or not it applies any content scaling, etc., but it should give you a sense of the conversion.
Upvotes: 0