Reputation: 578
I'm modifying the Apple SquareCam example face detection app so that it crops the face out before writing to the camera roll, instead of drawing the red square around the face. I'm using the same CGRect for the cropping that was used for drawing the red square. However, the behavior is different. In portrait mode, if the face is in the horizontal center of the screen, it crops the face as expected (the same place the red square would have been). If the face is off to the left or right, the crop always seems to be taken from the middle of the screen instead of from where the red square would have been.
Here is Apple's original code:
- (CGImageRef)newSquareOverlayedImageForFeatures:(NSArray *)features
                                       inCGImage:(CGImageRef)backgroundImage
                                 withOrientation:(UIDeviceOrientation)orientation
                                     frontFacing:(BOOL)isFrontFacing
{
    CGImageRef returnImage = NULL;
    CGRect backgroundImageRect = CGRectMake(0., 0., CGImageGetWidth(backgroundImage), CGImageGetHeight(backgroundImage));
    CGContextRef bitmapContext = CreateCGBitmapContextForSize(backgroundImageRect.size);
    CGContextClearRect(bitmapContext, backgroundImageRect);
    CGContextDrawImage(bitmapContext, backgroundImageRect, backgroundImage);
    CGFloat rotationDegrees = 0.;

    switch (orientation) {
        case UIDeviceOrientationPortrait:
            rotationDegrees = -90.;
            break;
        case UIDeviceOrientationPortraitUpsideDown:
            rotationDegrees = 90.;
            break;
        case UIDeviceOrientationLandscapeLeft:
            if (isFrontFacing) rotationDegrees = 180.;
            else rotationDegrees = 0.;
            break;
        case UIDeviceOrientationLandscapeRight:
            if (isFrontFacing) rotationDegrees = 0.;
            else rotationDegrees = 180.;
            break;
        case UIDeviceOrientationFaceUp:
        case UIDeviceOrientationFaceDown:
        default:
            break; // leave the layer in its last known orientation
    }

    UIImage *rotatedSquareImage = [square imageRotatedByDegrees:rotationDegrees];

    // features found by the face detector
    for ( CIFaceFeature *ff in features ) {
        CGRect faceRect = [ff bounds];
        NSLog(@"faceRect=%@", NSStringFromCGRect(faceRect));
        CGContextDrawImage(bitmapContext, faceRect, [rotatedSquareImage CGImage]);
    }

    returnImage = CGBitmapContextCreateImage(bitmapContext);
    CGContextRelease(bitmapContext);

    return returnImage;
}
and my replacement:
- (CGImageRef)newSquareOverlayedImageForFeatures:(NSArray *)features
                                       inCGImage:(CGImageRef)backgroundImage
                                 withOrientation:(UIDeviceOrientation)orientation
                                     frontFacing:(BOOL)isFrontFacing
{
    CGImageRef returnImage = NULL;

    // I'm only taking pics with one face. This is just for testing
    for ( CIFaceFeature *ff in features ) {
        CGRect faceRect = [ff bounds];
        returnImage = CGImageCreateWithImageInRect(backgroundImage, faceRect);
    }

    return returnImage;
}
Update:
Based on Wain's input, I tried to make my code more like the original, but the result was the same:
- (NSArray*)extractFaceImages:(NSArray *)features
                  fromCGImage:(CGImageRef)sourceImage
              withOrientation:(UIDeviceOrientation)orientation
                  frontFacing:(BOOL)isFrontFacing
{
    NSMutableArray *faceImages = [[[NSMutableArray alloc] initWithCapacity:1] autorelease];
    CGImageRef returnImage = NULL;
    CGRect backgroundImageRect = CGRectMake(0., 0., CGImageGetWidth(sourceImage), CGImageGetHeight(sourceImage));
    CGContextRef bitmapContext = CreateCGBitmapContextForSize(backgroundImageRect.size);
    CGContextClearRect(bitmapContext, backgroundImageRect);
    CGContextDrawImage(bitmapContext, backgroundImageRect, sourceImage);
    CGFloat rotationDegrees = 0.;

    switch (orientation) {
        case UIDeviceOrientationPortrait:
            rotationDegrees = -90.;
            break;
        case UIDeviceOrientationPortraitUpsideDown:
            rotationDegrees = 90.;
            break;
        case UIDeviceOrientationLandscapeLeft:
            if (isFrontFacing) rotationDegrees = 180.;
            else rotationDegrees = 0.;
            break;
        case UIDeviceOrientationLandscapeRight:
            if (isFrontFacing) rotationDegrees = 0.;
            else rotationDegrees = 180.;
            break;
        case UIDeviceOrientationFaceUp:
        case UIDeviceOrientationFaceDown:
        default:
            break; // leave the layer in its last known orientation
    }

    // features found by the face detector
    for ( CIFaceFeature *ff in features ) {
        CGRect faceRect = [ff bounds];
        NSLog(@"faceRect=%@", NSStringFromCGRect(faceRect));
        returnImage = CGBitmapContextCreateImage(bitmapContext);
        returnImage = CGImageCreateWithImageInRect(returnImage, faceRect);
        UIImage *clippedFace = [UIImage imageWithCGImage:returnImage];
        [faceImages addObject:clippedFace];
    }

    CGContextRelease(bitmapContext);
    return faceImages;
}
I took three pictures and logged faceRect, with these results:
Pic taken with the face positioned near the left edge of the device. The captured image completely misses the face to the right: faceRect={{972, 43.0312}, {673.312, 673.312}}
Pic taken with the face positioned in the middle of the device. The captured image is good: faceRect={{1060.59, 536.625}, {668.25, 668.25}}
Pic taken with the face positioned near the right edge of the device. The captured image completely misses the face to the left: faceRect={{982.125, 999.844}, {804.938, 804.938}}
So it appears that "x" and "y" are reversed. I'm holding the device in portrait, but faceRect seems to be landscape-based. However, I can't figure out what part of Apple's original code accounts for this. The orientation code in that method appears to affect only the red square overlay image itself.
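To illustrate what I mean by reversed: a naive swap like this is the kind of correction I'd expect to need if faceRect really is landscape-based (just my hypothesis, not a verified fix):
// Hypothetical correction: swap the axes of the (seemingly landscape-based)
// faceRect before cropping the portrait image. Unverified.
CGRect portraitRect = CGRectMake(faceRect.origin.y, faceRect.origin.x,
                                 faceRect.size.height, faceRect.size.width);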
Upvotes: 6
Views: 2114
Reputation: 5149
You are facing this issue because when the image is saved, it is saved vertically flipped, so the faceRect's position doesn't exactly coincide with the face. You can solve this by modifying the faceRect's position so that it is vertically flipped within the returnImage:
for ( CIFaceFeature *ff in features ) {
    faceRect = [ff bounds];
    CGRect modifiedRect = CGRectFlipVertical(faceRect, CGRectMake(0, 0, CGImageGetWidth(returnImage), CGImageGetHeight(returnImage)));
    returnImage = CGImageCreateWithImageInRect(returnImage, modifiedRect);
    UIImage *clippedFace = [UIImage imageWithCGImage:returnImage];
    [faceImages addObject:clippedFace];
}
The helper CGRectFlipVertical(CGRect innerRect, CGRect outerRect) can be defined like so:
CGRect CGRectFlipVertical(CGRect innerRect, CGRect outerRect)
{
    CGRect rect = innerRect;
    rect.origin.y = outerRect.origin.y + outerRect.size.height - (rect.origin.y + rect.size.height);
    return rect;
}
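One thing to watch in the loop above: both CGBitmapContextCreateImage and CGImageCreateWithImageInRect return images you own, and the snippet overwrites returnImage without releasing the intermediate. Folded into the question's extractFaceImages: method, a leak-free version of the loop might look like this (a sketch under manual reference counting, matching the question's use of autorelease):
for ( CIFaceFeature *ff in features ) {
    CGRect faceRect = [ff bounds];
    // Render the full frame, flip the face rect vertically within it, then crop.
    CGImageRef wholeImage = CGBitmapContextCreateImage(bitmapContext);
    CGRect modifiedRect = CGRectFlipVertical(faceRect, CGRectMake(0, 0, CGImageGetWidth(wholeImage), CGImageGetHeight(wholeImage)));
    CGImageRef croppedFace = CGImageCreateWithImageInRect(wholeImage, modifiedRect);
    [faceImages addObject:[UIImage imageWithCGImage:croppedFace]];
    // Both CGImageRefs are owned (+1) by this code, so release them each pass.
    CGImageRelease(croppedFace);
    CGImageRelease(wholeImage);
}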
Upvotes: 0
Reputation: 119031
You should keep all of the original code and just add one line before the return (with a tweak to move the image generation inside the loop, since you are only cropping the first face):
returnImage = CGImageCreateWithImageInRect(returnImage, faceRect);
This allows the image to be rendered with the correct orientation, which means the face rect will be in the correct place.
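Concretely, the tweak might look like this inside the original method (a sketch; it assumes a single detected face, as in the question, and keeps the red-square draw, which you could drop if you want an unmarked crop):
// features found by the face detector
for ( CIFaceFeature *ff in features ) {
    CGRect faceRect = [ff bounds];
    CGContextDrawImage(bitmapContext, faceRect, [rotatedSquareImage CGImage]);
    // Render the full, correctly oriented frame, then crop the face out of it.
    CGImageRef wholeImage = CGBitmapContextCreateImage(bitmapContext);
    returnImage = CGImageCreateWithImageInRect(wholeImage, faceRect);
    CGImageRelease(wholeImage);
}
CGContextRelease(bitmapContext);
return returnImage;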
Upvotes: 3