shadowmoses

Reputation: 343

Capturing still image from AVFoundation that matches viewfinder border on AVCaptureVideoPreviewLayer in Swift

Trying to capture what is in the green viewfinder after taking a picture.

Please see images:

image 1/3

image 2/3

image 3/3

This is what the code is doing currently:

Before taking the picture:

image 4/4

After taking the picture (scale of resulting image is not correct, as it does not match what is in the green viewfinder):

As you can see, the image needs to be scaled up to fit what was originally contained within the green viewfinder. Even when I calculate the correct scaling ratio (for the iPhone 6, I need to multiply the dimensions of the captured image by 1.334), it doesn't work.

Any ideas?

Upvotes: 3

Views: 1173

Answers (1)

shadowmoses

Reputation: 343

Steps to solve this:

First, get the full-size image. I also used an extension on the UIImage class called correctlyOriented:

let correctImage = UIImage(data: imageData!)!.correctlyOriented()

All this does is un-rotate the iPhone image, so a portrait image (taken with home button on the bottom of the iPhone) is oriented as expected. That extension is below:

import UIKit

extension UIImage {

    func correctlyOriented() -> UIImage {
        if imageOrientation == .up {
            return self
        }

        // We need to calculate the proper transformation to make the image upright.
        // We do it in 2 steps: rotate if left/right/down, then flip if mirrored.
        var transform = CGAffineTransform.identity

        switch imageOrientation {
        case .down, .downMirrored:
            transform = transform.translatedBy(x: size.width, y: size.height)
            transform = transform.rotated(by: CGFloat.pi)
        case .left, .leftMirrored:
            transform = transform.translatedBy(x: size.width, y: 0)
            transform = transform.rotated(by: CGFloat.pi * 0.5)
        case .right, .rightMirrored:
            transform = transform.translatedBy(x: 0, y: size.height)
            transform = transform.rotated(by: -CGFloat.pi * 0.5)
        default:
            break
        }

        switch imageOrientation {
        case .upMirrored, .downMirrored:
            transform = transform.translatedBy(x: size.width, y: 0)
            transform = transform.scaledBy(x: -1, y: 1)
        case .leftMirrored, .rightMirrored:
            transform = transform.translatedBy(x: size.height, y: 0)
            transform = transform.scaledBy(x: -1, y: 1)
        default:
            break
        }

        // Now we draw the underlying CGImage into a new context, applying the
        // transform calculated above.
        guard
            let cgImage = cgImage,
            let colorSpace = cgImage.colorSpace,
            let context = CGContext(data: nil,
                                    width: Int(size.width),
                                    height: Int(size.height),
                                    bitsPerComponent: cgImage.bitsPerComponent,
                                    bytesPerRow: 0,
                                    space: colorSpace,
                                    bitmapInfo: cgImage.bitmapInfo.rawValue) else {
                return self
        }

        context.concatenate(transform)

        switch imageOrientation {
        case .left, .leftMirrored, .right, .rightMirrored:
            // Width and height are swapped for the rotated cases.
            context.draw(cgImage, in: CGRect(x: 0, y: 0, width: size.height, height: size.width))
        default:
            context.draw(cgImage, in: CGRect(origin: .zero, size: size))
        }

        // And now we just create a new UIImage from the drawing context.
        guard let rotatedCGImage = context.makeImage() else {
            return self
        }

        return UIImage(cgImage: rotatedCGImage)
    }
}

Next, calculate the height factor:

let heightFactor = self.view.frame.height / correctImage.size.height

Create a new CGSize based on the height factor, and then resize the image (using a resize image function, not shown):

let newSize = CGSize(width: correctImage.size.width * heightFactor, height: correctImage.size.height * heightFactor)

let correctResizedImage = self.imageWithImage(image: correctImage, scaledToSize: newSize)
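The resize function itself isn't shown in the answer. A minimal sketch of what it could look like, assuming iOS/UIKit (the name imageWithImage(image:scaledToSize:) matches the call above, but this implementation is my assumption, not the original author's code):

```swift
import UIKit

// Hypothetical implementation of the resize helper used above.
func imageWithImage(image: UIImage, scaledToSize newSize: CGSize) -> UIImage {
    // A scale of 0 makes the context match the device's screen scale.
    UIGraphicsBeginImageContextWithOptions(newSize, false, 0.0)
    defer { UIGraphicsEndImageContext() }
    image.draw(in: CGRect(origin: .zero, size: newSize))
    return UIGraphicsGetImageFromCurrentImageContext() ?? image
}
```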

Now, we have an image that is the same height as our device, but wider, due to the 4:3 aspect ratio of the iPhone camera vs the 16:9 aspect ratio of the iPhone screen. So, crop the image to be the same size as the device screen:

let screenCrop = CGRect(x: (newSize.width - self.view.bounds.width) * 0.5,
                        y: 0,
                        width: self.view.bounds.width,
                        height: self.view.bounds.height)

let correctScreenCroppedImage = self.crop(image: correctResizedImage, to: screenCrop)
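The crop(image:to:) helper isn't shown either. One hedged way to implement it on UIKit (again an assumption, not the original code) is via CGImage's cropping(to:), remembering that CGImage coordinates are in pixels while the rect above is in points:

```swift
import UIKit

// Hypothetical implementation of the crop helper used above.
func crop(image: UIImage, to rect: CGRect) -> UIImage? {
    // Convert the point-based rect to pixel coordinates.
    let scaled = CGRect(x: rect.origin.x * image.scale,
                        y: rect.origin.y * image.scale,
                        width: rect.size.width * image.scale,
                        height: rect.size.height * image.scale)
    guard let cgImage = image.cgImage?.cropping(to: scaled) else {
        return nil
    }
    return UIImage(cgImage: cgImage, scale: image.scale, orientation: image.imageOrientation)
}
```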

Lastly, we need to replicate the "crop" created by the green "viewfinder". So, we perform another crop to make the final image match:

let correctCrop = CGRect(x: 0,
                         y: (correctScreenCroppedImage!.size.height * 0.5) - (correctScreenCroppedImage!.size.width * 0.5),
                         width: correctScreenCroppedImage!.size.width,
                         height: correctScreenCroppedImage!.size.width)

let correctCroppedImage = self.crop(image: correctScreenCroppedImage!, to: correctCrop)
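To see how the three steps fit together numerically, here is a sketch of just the geometry, using hypothetical iPhone 6 numbers (375×667 pt screen, 2448×3264 portrait photo — illustrative assumptions, not values from the question):

```swift
import Foundation

// Hypothetical dimensions, for illustration only:
let screenSize = CGSize(width: 375, height: 667)   // iPhone 6 screen, in points
let photoSize  = CGSize(width: 2448, height: 3264) // portrait 4:3 photo

// Step 1: scale the photo so its height matches the screen height.
let heightFactor = screenSize.height / photoSize.height          // ≈ 0.2044
let newSize = CGSize(width: photoSize.width * heightFactor,      // ≈ 500.25 — wider than
                     height: photoSize.height * heightFactor)    // the 375 pt screen

// Step 2: crop the horizontal overflow (4:3 photo vs 16:9 screen),
// trimming equally from both sides.
let screenCrop = CGRect(x: (newSize.width - screenSize.width) * 0.5, // ≈ 62.6
                        y: 0,
                        width: screenSize.width,
                        height: screenSize.height)

// Step 3: crop vertically to the square viewfinder region,
// centered on the screen.
let viewfinderCrop = CGRect(x: 0,
                            y: (screenSize.height * 0.5) - (screenSize.width * 0.5), // 146.0
                            width: screenSize.width,
                            height: screenSize.width)
```

The key point is that newSize.width comes out larger than the screen width, which is exactly the overflow the second crop removes.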

Credit for this answer goes to @damirstuhec

Upvotes: 2
