BlueskyMed

Reputation: 1305

UIImageView: How to capture just visible content after Pan & Zoom

I have a UIImageView that is loaded by an ImagePicker with a photo (roughly 3000 x 4000 pixels), displayed using aspect-fit mode. I then pan and zoom the image with gesture recognizers, which adjust the transform scale and translation of the view. This works well; here is the code:

@objc private func startZooming(_ sender: UIPinchGestureRecognizer) {
  let scaleResult = sender.view?.transform.scaledBy(x: sender.scale, y: sender.scale)
  guard let scale = scaleResult, scale.a > 1, scale.d > 1 else { return }
  sender.view?.transform = scale
  sender.scale = 1
}
  
@objc private func startPanning(_ sender: UIPanGestureRecognizer) {
    let translate = sender.translation(in: mainImageView)
    let center = mainImageView.center
    mainImageView.center = CGPoint(x: center.x + translate.x, y: center.y + translate.y)
    self.currImageTranslation.x += translate.x
    self.currImageTranslation.y += translate.y
    sender.setTranslation(CGPoint(x: 0, y: 0), in: mainImageView)
}

My problem is that I need the visual result of the transform as a CGImage to feed into a visual inference engine. When I get the image from the UIImageView:

let image = myUIImageView.image // I get the original untransformed image

when I try:

UIGraphicsBeginImageContextWithOptions(image!.size, false, 0.0)
  myImageView.layer.render(in: UIGraphicsGetCurrentContext()!)
  let newImage = UIGraphicsGetImageFromCurrentImageContext()!
UIGraphicsEndImageContext()

newImage is also just the original, untransformed image.

when I try:

let rect = mainImageView.bounds
let scale = UIScreen.main.scale
var t = self.currImageTranslation
let tScale = mainImageView.transform.a
let zoomOffsetFactorX = ((t.x /  rect.size.width) * tScale)
let zoomOffsetFactorY = ((t.y / rect.size.height) * tScale)
t.x = t.x - zoomOffsetFactorX
t.y = (t.y + 20.0) + zoomOffsetFactorY

// create a context
UIGraphicsBeginImageContextWithOptions(rect.size, false, 0.0)
let context = UIGraphicsGetCurrentContext()!
let transform = mainImageView.transform
let imrect = CGRect(origin: t, size: rect.size)
context.concatenate(transform)
let tempImage = mainImageView.image
tempImage!.draw(in: imrect)
let newImage = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()

The newImage translates properly, but only at scale = 1.0. Once zoomed, with or without the scale correction, it produces fractional images, i.e. clipped to the (0, 0) origin.
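For reference, with a scale s applied about the view's center and the center moved by (tx, ty), the still-visible portion of the untransformed view works out to a rect of size (W/s, H/s) whose origin depends on both the scale and the translation. A minimal sketch of that geometry (plain Swift, hypothetical helper name, not the code above):

```swift
// Visible portion of a W x H view, in the view's own (untransformed)
// coordinates, after scaling by `s` about the view's center and then
// moving the center by (tx, ty). Illustrative helper only.
func visibleRect(width W: Double, height H: Double,
                 scale s: Double, tx: Double, ty: Double)
    -> (x: Double, y: Double, w: Double, h: Double) {
    let cx = W / 2, cy = H / 2
    // A screen point q maps back to the view point p = c + (q - c - t) / s,
    // so the visible window shrinks by 1/s and shifts against the pan.
    let x = cx - (cx + tx) / s
    let y = cy - (cy + ty) / s
    return (x, y, W / s, H / s)
}

// No transform: the whole view is visible.
let r0 = visibleRect(width: 400, height: 500, scale: 1, tx: 0, ty: 0)
// 2x zoom, no pan: the centered half-size region is visible.
let r1 = visibleRect(width: 400, height: 500, scale: 2, tx: 0, ty: 0)
```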

I would like to generate a UIImage of just what the user sees on the screen in that ImageView. Can anyone help?

With @DonMag's generous suggestions, I tried the following:

func enableZoom() {
  let pinchGesture = UIPinchGestureRecognizer(target: self, 
                action: #selector(startZooming(_:)))
  let panGesture = UIPanGestureRecognizer(target: self, 
                action: #selector(startPanning(_:)))
    containerView.isUserInteractionEnabled = true
    containerView.addGestureRecognizer(pinchGesture)
    containerView.addGestureRecognizer(panGesture)
}


@objc private func startZooming(_ sender: UIPinchGestureRecognizer) {
  let scaleResult = sender.view?.transform.scaledBy(x: sender.scale, y: sender.scale)
  guard let scale = scaleResult, scale.a > 1, scale.d > 1 else { return }
  sender.view?.transform = scale
  newImage()
  sender.scale = 1
}
  
@objc private func startPanning(_ sender: UIPanGestureRecognizer) {
    let translate = sender.translation(in: mainImageView)
    let center = mainImageView.center
    mainImageView.center = CGPoint(x: center.x + translate.x, y: center.y + translate.y)
    self.currImageTranslation.x += translate.x
    self.currImageTranslation.y += translate.y
    sender.setTranslation(CGPoint(x: 0, y: 0), in: mainImageView)
    newImage()
}

//MARK:- create cropped/panned image and infer
func newImage() -> Void {
    
    let format = UIGraphicsImageRendererFormat()
    // we want a 1:1 points-to-pixels output
    format.scale = 1
    let renderer = UIGraphicsImageRenderer(size: containerView.bounds.size, format: format)
    let image = renderer.image { ctx in
        containerView.drawHierarchy(in: containerView.bounds, afterScreenUpdates: true)
    }
    self.currCGImage = image.cgImage
    //repeatInference()   // runs only once every 0.3 s
    return
}
  

Now newImage() correctly reflects the translations created by the pan gesture, but seemingly ignores the pinch gesture's changes in scale. drawHierarchy seems to be ignoring the transform imposed on the UIView. Where am I going wrong?

Upvotes: 0

Views: 693

Answers (2)

BlueskyMed

Reputation: 1305

@DonMag put me on the right track. However, to get the correct scaling in the grabbed image, I needed to apply the scaledBy transform to the UIImageView, not the containerView. I also needed to set the renderer format's scale to the current scale factor from the pinch gesture. Here is the working version:

    func enableZoom() {
      let pinchGesture = UIPinchGestureRecognizer(target: self, action: #selector(startZooming(_:)))
      let panGesture = UIPanGestureRecognizer(target: self, action: #selector(startPanning(_:)))
        containerView.isUserInteractionEnabled = true
        containerView.addGestureRecognizer(pinchGesture)
        containerView.addGestureRecognizer(panGesture)
    }

    @objc private func startZooming(_ sender: UIPinchGestureRecognizer) {
//        let scaleResult = sender.view?.transform.scaledBy(x: sender.scale, y: sender.scale)
//        guard let scale = scaleResult, scale.a > 1, scale.d > 1 else { return }
//        sender.view?.transform = scale
      let scaleResult = mainImageView?.transform.scaledBy(x: sender.scale, y: sender.scale)
      guard let scale = scaleResult, scale.a > 1, scale.d > 1 else { return }
      mainImageView.transform = scale
      currImageScale = sender.scale
      newImage()
      sender.scale = 1
    }
      
    @objc private func startPanning(_ sender: UIPanGestureRecognizer) {
        let translate = sender.translation(in: mainImageView)
        let center = mainImageView.center
        mainImageView.center = CGPoint(x: center.x + translate.x, y: center.y + translate.y)
        self.currImageTranslation.x += translate.x
        self.currImageTranslation.y += translate.y
        sender.setTranslation(CGPoint(x: 0, y: 0), in: mainImageView)
        newImage()
    }
    
    //MARK:- create cropped/panned image and infer
    func newImage() -> Void {
        
        let format = UIGraphicsImageRendererFormat()
        // match the renderer's output scale to the current zoom factor
        format.scale = currImageScale
        let renderer = UIGraphicsImageRenderer(size: containerView.bounds.size, format: format)
        let image = renderer.image { ctx in
            containerView.drawHierarchy(in: containerView.bounds, afterScreenUpdates: true)
        }
        self.currCGImage = image.cgImage
        repeatInference()   // runs only once every 0.3 s
        return
    }
    
    

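One caveat with the bookkeeping above: since `sender.scale` is reset to 1 at the end of every pinch callback, `currImageScale = sender.scale` captures only the last increment, not the cumulative zoom. The cumulative value is also available as `mainImageView.transform.a`, or it can be accumulated by multiplying. A minimal sketch simulating the callbacks (plain Swift, hypothetical names, not UIKit):

```swift
// Simulates the pinch-handler bookkeeping. `sender.scale` is reset
// to 1 after every callback, so a direct assignment records only the
// latest increment; the cumulative zoom must be multiplied up (it
// also equals the transform's `a` component).
struct PinchSim {
    var transformA = 1.0      // stands in for mainImageView.transform.a
    var cumulativeScale = 1.0 // stands in for currImageScale

    mutating func pinch(increment: Double) {
        transformA *= increment       // transform.scaledBy(x:y:)
        cumulativeScale *= increment  // accumulate, don't assign
        // sender.scale = 1 happens here in the real handler
    }
}

var sim = PinchSim()
sim.pinch(increment: 1.5)
sim.pinch(increment: 1.2)
// Both values track the total zoom of 1.8, not the last increment of 1.2.
```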
Upvotes: 0

DonMag

Reputation: 77672

I'm not entirely sure what you're doing with your transform code, but this may work for you -- and is much simpler...

If I understand correctly, you have a UIImageView with .scaleAspectFit content mode inside a UIView "container", and you apply a scale and translate transform to the image view. In that case, you can render the container with UIGraphicsImageRenderer:

    let format = UIGraphicsImageRendererFormat()
    // we want a 1:1 points-to-pixels output
    format.scale = 1
    let renderer = UIGraphicsImageRenderer(size: containerView.bounds.size, format: format)
    let image = renderer.image { ctx in
        containerView.drawHierarchy(in: containerView.bounds, afterScreenUpdates: true)
    }

Here is a complete example implementation:

class TransImageViewController: UIViewController {

    let containerView = UIView()
    let imageView = UIImageView()

    override func viewDidLoad() {
        super.viewDidLoad()
        
        guard let img = UIImage(named: "bkg_2400x1600") else {
            fatalError("Could not load the image!!!")
        }
        
        let stack = UIStackView()
        stack.spacing = 20
        stack.distribution = .fillEqually
        
        let b1 = UIButton()
        let b2 = UIButton()
        
        [b1, b2].forEach { b in
            b.backgroundColor = .red
            b.setTitleColor(.white, for: .normal)
            b.setTitleColor(.gray, for: .highlighted)
        }
        
        b1.setTitle("Transform", for: [])
        b2.setTitle("Capture", for: [])
        
        stack.addArrangedSubview(b1)
        stack.addArrangedSubview(b2)

        [stack, containerView, imageView].forEach {
            $0.translatesAutoresizingMaskIntoConstraints = false
        }
        
        view.addSubview(stack)
        view.addSubview(containerView)
        containerView.addSubview(imageView)
        
        let g = view.safeAreaLayoutGuide
        NSLayoutConstraint.activate([
            
            // buttons stack at top
            stack.topAnchor.constraint(equalTo: g.topAnchor, constant: 20.0),
            stack.leadingAnchor.constraint(equalTo: g.leadingAnchor, constant: 40.0),
            stack.trailingAnchor.constraint(equalTo: g.trailingAnchor, constant: -40.0),
            
            // container 400x500 centered
            containerView.widthAnchor.constraint(equalToConstant: 400),
            containerView.heightAnchor.constraint(equalToConstant: 500),
            containerView.centerXAnchor.constraint(equalTo: g.centerXAnchor),
            containerView.centerYAnchor.constraint(equalTo: g.centerYAnchor),
            
            // imageView constrained all 4 sides to container
            imageView.topAnchor.constraint(equalTo: containerView.topAnchor),
            imageView.leadingAnchor.constraint(equalTo: containerView.leadingAnchor),
            imageView.trailingAnchor.constraint(equalTo: containerView.trailingAnchor),
            imageView.bottomAnchor.constraint(equalTo: containerView.bottomAnchor),

        ])
        
        containerView.clipsToBounds = true
        imageView.contentMode = .scaleAspectFit
        imageView.image = img
        
        view.backgroundColor = .blue
        containerView.backgroundColor = .yellow
        imageView.backgroundColor = .orange

        b1.addTarget(self, action: #selector(self.doTransform), for: .touchUpInside)
        b2.addTarget(self, action: #selector(self.grabImage), for: .touchUpInside)
    }

    @objc func doTransform() -> Void {
        var t = CGAffineTransform.identity
        t = t.scaledBy(x: 4.0, y: 4.0)
        t = t.translatedBy(x: -120, y: 40)
        imageView.transform = t
    }
    
    @objc func grabImage() -> Void {

        let format = UIGraphicsImageRendererFormat()
        // we want a 1:1 points-to-pixels output
        format.scale = 1
        let renderer = UIGraphicsImageRenderer(size: containerView.bounds.size, format: format)
        let image = renderer.image { ctx in
            containerView.drawHierarchy(in: containerView.bounds, afterScreenUpdates: true)
        }
        
        // do what you want with the resulting image
        print("Resulting image size:", image.size)

    }

}

Using this image as my "bkg_2400x1600" image asset:

[image: the sample 2400 x 1600 asset]

The above code starts like this (my "container" view is 400x500 pts):

[image: initial layout, image aspect-fit in the yellow container]

Tapping the "Transform" button applies .scaledBy(x: 4.0, y: 4.0) and .translatedBy(x: -120, y: 40):

[image: container after the transform is applied]

and then tapping "Capture" gives me this 400x500 pixel image:

[image: the captured 400 x 500 pixel output]
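A note on the transform order in doTransform(): Core Graphics chains these calls so that translatedBy(x:y:) is applied before the scale, which means the (-120, 40) translation happens in the *scaled* coordinate space, i.e. an effective on-screen shift of (-480, 160) points. A small matrix sketch (plain Swift mimicking the Core Graphics convention, not UIKit) checks that arithmetic:

```swift
// Minimal 2x3 affine matrix mimicking CGAffineTransform's chaining,
// where t.scaledBy / t.translatedBy prepend the new operation
// (it is applied first, then the existing transform t).
struct Affine {
    var a = 1.0, b = 0.0, c = 0.0, d = 1.0, tx = 0.0, ty = 0.0

    // Result applies `m` first, then `self` (row-vector convention).
    func prepending(_ m: Affine) -> Affine {
        Affine(a: m.a * a + m.b * c,  b: m.a * b + m.b * d,
               c: m.c * a + m.d * c,  d: m.c * b + m.d * d,
               tx: m.tx * a + m.ty * c + tx,
               ty: m.tx * b + m.ty * d + ty)
    }
    func scaledBy(x: Double, y: Double) -> Affine {
        prepending(Affine(a: x, b: 0, c: 0, d: y, tx: 0, ty: 0))
    }
    func translatedBy(x: Double, y: Double) -> Affine {
        prepending(Affine(a: 1, b: 0, c: 0, d: 1, tx: x, ty: y))
    }
}

// Same chain as doTransform(): the translation is scaled by 4.
let t = Affine().scaledBy(x: 4, y: 4).translatedBy(x: -120, y: 40)
```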

Upvotes: 1
