Wim Haanstra

Reputation: 5998

Cutting out predefined piece of a photo from camera

For an application I am developing, I let the user specify the dimensions of an object they want to capture on camera (for example 30" x 40"). The next thing I want to do is show the camera feed with a cameraOverlayView on top of it, showing nothing but a stroked transparent rectangle with the right aspect ratio for capturing that object.

So I tried 2 things to get this to work:

Use a UIViewController which uses an AVCaptureVideoPreviewLayer to display a view with the live video feed. On top of that feed I display a transparent view, which draws a rectangle with the right dimensions (using the ratio the user specified).

and

In another attempt I created a UIViewController, containing a button which pops up the UIImagePickerController. Using this controller, I also created a view which I attach to the picker using the cameraOverlayView property.
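Roughly, that second attempt looks like this (a minimal sketch; the method and variable names here are illustrative, not from my actual code):

- (IBAction) takePictureTapped:(id)sender {
    UIImagePickerController *picker = [[UIImagePickerController alloc] init];
    picker.sourceType = UIImagePickerControllerSourceTypeCamera;
    picker.delegate = self;

    // Transparent view that draws the stroked rectangle. It must not
    // swallow touches, or the camera controls underneath stop working.
    UIView *overlay = [[UIView alloc] initWithFrame:[UIScreen mainScreen].bounds];
    overlay.backgroundColor = [UIColor clearColor];
    overlay.userInteractionEnabled = NO;
    picker.cameraOverlayView = overlay;

    [self presentViewController:picker animated:YES completion:nil];
}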

The main problem I am having with both of these methods is that the image that is actually captured is always larger than what I see on screen, and I am not entirely sure how to cut out THAT piece of the image after the picture has been taken.

So for example: my UIImagePickerController is shown, and I put an overlay over it showing a rectangle that is 300 x 400px. The user uses this rectangle to take a picture of their object, centering the object inside it.

The picture is taken, but instead of a picture that is 320x480 (or 640x960), I get a result that is 3500x2400 (or something like that); it's a completely different ratio than the screen ratio, of course.

How do I then make sure I cut out the right part of the image?

The code that actually calculates the size of the square that should be shown (and should be used to determine what piece of the picture should be cut):

+ (CGRect) getFrameRect:(CGRect) rect forSize:(CGSize) frameSize {
    if (CGSizeEqualToSize(frameSize, CGSizeZero))
        return CGRectZero;

    // Leave a 10pt margin on every side of the containing rect.
    CGFloat maxWidth = rect.size.width - 20;
    CGFloat maxHeight = rect.size.height - 20;

    // Scale factor that fits frameSize inside the available area
    // while preserving its aspect ratio.
    CGFloat ratioX = maxWidth / frameSize.width;
    CGFloat ratioY = maxHeight / frameSize.height;
    CGFloat ratio = MIN(ratioX, ratioY);

    CGFloat newWidth = frameSize.width * ratio;
    CGFloat newHeight = frameSize.height * ratio;

    // Center the resulting rectangle inside rect.
    CGFloat x = (rect.size.width - newWidth) / 2;
    CGFloat y = (rect.size.height - newHeight) / 2;

    return CGRectMake(x, y, newWidth, newHeight);
}

This determines the largest rectangle that can be drawn with the aspect ratio specified in the frameSize parameter, inside the area supplied in the rect parameter (inset by a 10pt margin on each side).
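Plugging in the numbers from my example above, the 300 x 400 overlay falls out of this calculation directly:

// 30" x 40" object in a 320 x 480 view:
// maxWidth = 300, maxHeight = 460 -> ratio = MIN(300/30, 460/40) = 10
CGRect overlayFrame = [FrameSizeCalculations getFrameRect:CGRectMake(0, 0, 320, 480)
                                                  forSize:CGSizeMake(30, 40)];
// overlayFrame == CGRectMake(10, 40, 300, 400)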

Some solutions come to mind, but I am not sure they are doable.

Upvotes: 3

Views: 545

Answers (1)

Wim Haanstra

Reputation: 5998

OK, I found the solution:

When you are taking a picture with your camera, the preview screen shows only a part of the photo you are taking.

When your iOS device is in portrait mode, the photo height is scaled down to the height of the screen, and only the middle 640px are shown.

[Diagram: the full captured photo, with the portion visible on screen marked in darker red]

The darker red part is what is shown on screen. So when you take a picture, you need to downsize your image to the max height of your screen to get the right width.

After that I cut out the middle 640x960 pixels to actually get the same image as was shown when taking the picture.
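For example, assuming the camera returns a 2448x3264 portrait photo: the height ratio is 3264 / 960 = 3.4, so the scaled image comes out at 720x960, and cutting out the middle 640x960 trims (720 - 640) / 2 = 40px off each side.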

After that, the coordinates in the cropped image line up with the coordinates of my rectangular overlay.

- (void) imagePickerController:(UIImagePickerController *)picker didFinishPickingMediaWithInfo:(NSDictionary *)info {
    [picker dismissViewControllerAnimated:YES completion:nil];

    UIImage* artImage = [info objectForKey:UIImagePickerControllerOriginalImage];

    // Scale the full-resolution photo so its height matches the 960px
    // screen height; the width scales along with it.
    CGFloat imageHeightRatio = artImage.size.height / 960;
    CGFloat imageWidth = artImage.size.width / imageHeightRatio;

    CGSize newImageSize = CGSizeMake(imageWidth, 960);
    artImage = [artImage imageByScalingProportionallyToSize:newImageSize];

    // Cut out the middle 640x960: the part that was visible on screen.
    CGRect cutOutRect = CGRectMake((artImage.size.width / 2) - (640 / 2), 0, 640, 960);
    artImage = [self imageByCropping:artImage toRect:cutOutRect];

    // The image now matches the screen, so the same frame calculation
    // that positioned the overlay gives the final crop rectangle.
    CGRect imageCutRect = [FrameSizeCalculations getFrameRect:CGRectMake(0, 0, artImage.size.width, artImage.size.height) forSize:self.frameSize];
    artImage = [self imageByCropping:artImage toRect:imageCutRect];

    CGRect imageViewRect = CGRectInset(_containmentView.bounds, 10, 10);

    NSLog(@"containmentView frame: %f x %f x %f x %f",
          _containmentView.frame.origin.x,
          _containmentView.frame.origin.y,
          _containmentView.frame.size.width,
          _containmentView.frame.size.height
          );

    NSLog(@"imageViewRect: %f x %f x %f x %f",
          imageViewRect.origin.x,
          imageViewRect.origin.y,
          imageViewRect.size.width,
          imageViewRect.size.height
          );

    _imageView.frame = [FrameSizeCalculations getFrameRect:imageViewRect forSize:self.frameSize];

    NSLog(@"imageView frame: %f x %f x %f x %f",
          _imageView.frame.origin.x,
          _imageView.frame.origin.y,
          _imageView.frame.size.width,
          _imageView.frame.size.height
          );

    _imageView.contentMode = UIViewContentModeScaleAspectFill;
    _imageView.image = artImage;
}
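The two helpers used above are not shown here: imageByScalingProportionallyToSize: is a UIImage scaling category, and imageByCropping:toRect: is a plain CGImage crop. A minimal stand-in for them could look like this (my own sketch, not necessarily the exact implementations I used):

// Stand-in for the scaling category: re-renders the image at the target
// size. newImageSize above is already proportional, so this preserves
// the aspect ratio.
- (UIImage *) imageByScaling:(UIImage *)image toSize:(CGSize)size {
    UIGraphicsBeginImageContextWithOptions(size, NO, 1.0);
    [image drawInRect:CGRectMake(0, 0, size.width, size.height)];
    UIImage *scaled = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return scaled;
}

// Crops by taking a sub-rectangle of the backing CGImage. This ignores
// UIImage orientation, which is fine here because the scaling step has
// already re-rendered the image upright.
- (UIImage *) imageByCropping:(UIImage *)image toRect:(CGRect)rect {
    CGImageRef cgImage = CGImageCreateWithImageInRect(image.CGImage, rect);
    UIImage *cropped = [UIImage imageWithCGImage:cgImage];
    CGImageRelease(cgImage);
    return cropped;
}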

Upvotes: 3
