theZ3r0CooL

Reputation: 227

Masking an image to the transparent section of another Image in SwiftUI

I have a ZStack like this:

ZStack {
    Image("source") // picture: the solid square image
    Image("source") // frame: the hexagonal image with a transparent center
}

One layer is a solid square image; the other is a hexagonal image, slightly smaller than the one behind it, with a hexagonal area of transparency in its center, like a frame for the image behind it. I am trying to clip the image behind the frame so that it only shows through the transparent area in the center of the frame image. I tried declaring the frame image as a var that I use both in the view and as a mask for the image behind it, but that obviously crops the image to the exact (opaque) shape of the frame. My thinking was that I could invert the alpha channel of my frame image so the picture behind it shows where the frame is transparent, but I haven't had any luck inverting the alpha channel. That attempt looked like this:

ZStack {
    let mImg = Image("source")
    // img
    Image("source")
      .mask(mImg) //attempting to invert alpha channel of frame to be used here
    // frame
    mImg
}
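For what it's worth, the alpha inversion itself could be done with Core Image's CIColorMatrix filter before the image ever reaches SwiftUI. This is only a sketch under the assumption that the frame is available as a UIImage; the helper name invertAlpha is mine, not an API:

```swift
import UIKit
import CoreImage

// Hypothetical helper: returns a copy of the image with its alpha channel
// inverted (opaque becomes clear and vice versa), for use as a mask.
func invertAlpha(of image: UIImage) -> UIImage? {
    guard let input = CIImage(image: image),
          let filter = CIFilter(name: "CIColorMatrix") else { return nil }
    filter.setValue(input, forKey: kCIInputImageKey)
    // newAlpha = (-1 * oldAlpha) + 1; the RGB vectors keep their defaults.
    filter.setValue(CIVector(x: 0, y: 0, z: 0, w: -1), forKey: "inputAVector")
    filter.setValue(CIVector(x: 0, y: 0, z: 0, w: 1), forKey: "inputBiasVector")
    guard let output = filter.outputImage,
          let cgImage = CIContext().createCGImage(output, from: output.extent) else { return nil }
    return UIImage(cgImage: cgImage)
}
```

Because Core Image works with premultiplied color, the RGB of the result may look odd, but when the result is only used as an alpha mask that shouldn't matter.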

After failing to invert the alpha channel, I also thought maybe blendMode would work: with the frame behind the img and the blendMode applied to the img, the frame would be drawn over the img, and the img would be drawn only in areas where the frame is transparent, like this:

ZStack {
    // frame
    Image("source")
    // img
    Image("source")
       .blendMode(.destinationOver)
}

which resulted in the frame being drawn on top, but the image disappearing entirely. So then I read about .destinationOut, which displays the bottom layer only where the top layer is transparent. I figured this:

ZStack {
    // img
    Image("source")
    // frame
    Image("source")
       .blendMode(.destinationOut)
}

would mean only the bottom layer (img) would be drawn, and only in the areas where the top layer (frame) is transparent. Then I could layer an additional copy of the frame on top of that, since the first copy would only be used for blending. However, this blend mode acts nothing like the documentation describes: it still draws both layers, and the frame on top of the img layer renders as solid black.
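One thing I've since read is that SwiftUI blend modes composite against everything behind the view, not just the other ZStack layer, unless the group is isolated with .compositingGroup(). Presumably something like this would confine the blending (untested sketch, asset names assumed):

```swift
ZStack {
    ZStack {
        Image("picture")
        Image("frame")
            .blendMode(.destinationOut) // erase the picture where the frame is opaque
    }
    .compositingGroup() // confine the blending to this sub-tree
    Image("frame") // then draw the frame normally on top
}
```

Note that this keeps the picture wherever the frame is transparent, including outside the frame's outline, so an outer clip shape may still be needed.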

Needless to say, I have wasted more time than I would like to admit on this, and I feel like I'm making something easy very complicated because I'm missing one small change in modifier order, blend mode, or mask option.

Any suggestions, info, or help anyone can provide on the topic is greatly appreciated, as always. I can also mock up a short working example struct if anyone wants to try playing with the code to solve the issue. Thank you in advance for any time spent helping me out.

Example Images:

(frame image) (image to be framed inside the transparent center of the frame image)

Update:

So I used this UIImage extension:

extension UIImage {
    
    class func imageByCombiningImage(firstImageUrl: String, withImageUrl: String) -> UIImage {
        
        let firstImage = try? UIImage(withContentsOfUrl: URL(string: firstImageUrl)!)
        let secondImage = try? UIImage(withContentsOfUrl: URL(string: withImageUrl)!)

        let newImageWidth  = max(firstImage!.size.width,  secondImage!.size.width )
        let newImageHeight = max(firstImage!.size.height, secondImage!.size.height)
        let newImageSize = CGSize(width : newImageWidth, height: newImageHeight)


        UIGraphicsBeginImageContextWithOptions(newImageSize, false, UIScreen.main.scale)

        let firstImageDrawX  = round((newImageSize.width  - firstImage!.size.width  ) / 2)
        let firstImageDrawY  = round((newImageSize.height - firstImage!.size.height ) / 2)

        let secondImageDrawX = round((newImageSize.width  - secondImage!.size.width ) / 2)
        let secondImageDrawY = round((newImageSize.height - secondImage!.size.height) / 2)

        firstImage!.draw(at: CGPoint(x: firstImageDrawX,  y: firstImageDrawY))
        secondImage!.draw(at: CGPoint(x: secondImageDrawX, y: secondImageDrawY), blendMode: .sourceAtop, alpha: 1.0)

        let image = UIGraphicsGetImageFromCurrentImageContext()

        UIGraphicsEndImageContext()
        
        return image!
    }
    
    convenience init?(withContentsOfUrl url: URL) throws {
        let imageData = try Data(contentsOf: url)
        self.init(data: imageData)
    }
}

where 'firstImage' is the picture and 'secondImage' is the frame. It crops the frame on top to the border of the picture, in such a way that none of the frame bleeds out past the picture's dimensions. This is the behavior I am looking to achieve, just the other way around: cropping the edges of the picture to the outline of the frame. However, if I flip the order so that the picture would be cropped to the edges of the frame, the entire picture disappears, leaving a square 'cutout' on top of the frame image.
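The .sourceAtop result makes sense given its definition, R = S·Da + D·(1−Sa): the source only survives where the destination is opaque, and the frame's center hole has Da ≈ 0, which is why flipping the draw order erases the picture. One way around it, assuming the transparent region can be described as a UIBezierPath, is to clip the picture to that path and then draw the frame normally over it (sketch; the function name and parameters are mine):

```swift
import UIKit

// Sketch: clip the picture to the hole's path, then draw the frame over it.
func composite(picture: UIImage, frame frameImage: UIImage, hole: UIBezierPath) -> UIImage? {
    UIGraphicsBeginImageContextWithOptions(frameImage.size, false, 0)
    defer { UIGraphicsEndImageContext() }
    guard let ctx = UIGraphicsGetCurrentContext() else { return nil }
    ctx.saveGState()
    ctx.addPath(hole.cgPath)
    ctx.clip() // everything drawn from here on is confined to the hole
    picture.draw(in: CGRect(origin: .zero, size: frameImage.size))
    ctx.restoreGState()
    frameImage.draw(at: .zero) // frame on top, unclipped
    return UIGraphicsGetImageFromCurrentImageContext()
}
```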

Upvotes: 0

Views: 1576

Answers (1)

theZ3r0CooL

Reputation: 227

Okay, I figured it out. I am using the following functions:

public func roundedPolygonPath(rect: CGRect, lineWidth: CGFloat, sides: NSInteger, cornerRadius: CGFloat, rotationOffset: CGFloat = 0) -> UIBezierPath {
    let path = UIBezierPath()
    let theta: CGFloat = CGFloat(2.0 * Double.pi) / CGFloat(sides)
    let width = min(rect.size.width, rect.size.height)
    
    let center = CGPoint(x: rect.origin.x + width / 2.0, y: rect.origin.y + width / 2.0)
    let radius = (width - lineWidth + cornerRadius - (cos(theta) * cornerRadius)) / 2.0
    
    var angle = CGFloat(rotationOffset)
    
    let corner = CGPoint(x: center.x + (radius - cornerRadius) * cos(angle), y: center.y + (radius - cornerRadius) * sin(angle))
    path.move(to: CGPoint(x: corner.x + cornerRadius * cos(angle + theta), y: corner.y + cornerRadius * sin(angle + theta)))
    
    for _ in 0..<sides {
        angle += theta
        
        let corner = CGPoint(x: center.x + (radius - cornerRadius) * cos(angle), y: center.y + (radius - cornerRadius) * sin(angle))
        let tip = CGPoint(x: center.x + radius * cos(angle), y: center.y + radius * sin(angle))
        let start = CGPoint(x: corner.x + cornerRadius * cos(angle - theta), y: corner.y + cornerRadius * sin(angle - theta))
        let end = CGPoint(x: corner.x + cornerRadius * cos(angle + theta), y: corner.y + cornerRadius * sin(angle + theta))
        
        path.addLine(to: start)
        path.addQuadCurve(to: end, controlPoint: tip)
    }
    
    path.close()
    
    let bounds = path.bounds
    let transform = CGAffineTransform(translationX: -bounds.origin.x + rect.origin.x + lineWidth / 2.0, y: -bounds.origin.y + rect.origin.y + lineWidth / 2.0)
    path.apply(transform)
    
    return path
}

public func createImage(layer: CALayer) -> UIImage {
    let size = CGSize(width: layer.frame.maxX, height: layer.frame.maxY)
    UIGraphicsBeginImageContextWithOptions(size, layer.isOpaque, 0.0)
    let ctx = UIGraphicsGetCurrentContext()!
    
    layer.render(in: ctx)
    let image = UIGraphicsGetImageFromCurrentImageContext()
    
    UIGraphicsEndImageContext()
    
    return image!
}

In combination with my UIImage extension:

extension UIImage {
    
    class func imageByCombiningImage(firstImage: UIImage, secondImage: UIImage) -> UIImage {
        
        let newImageWidth  = max(firstImage.size.width,  secondImage.size.width )
        let newImageHeight = max(firstImage.size.height, secondImage.size.height)
        let newImageSize = CGSize(width : newImageWidth, height: newImageHeight)


        UIGraphicsBeginImageContextWithOptions(newImageSize, false, UIScreen.main.scale)

        let firstImageDrawX  = round((newImageSize.width  - firstImage.size.width  ) / 2)
        let firstImageDrawY  = round((newImageSize.height - firstImage.size.height ) / 2)

        let secondImageDrawX = round((newImageSize.width  - secondImage.size.width ) / 2)
        let secondImageDrawY = round((newImageSize.height - secondImage.size.height) / 2)

        firstImage.draw(at: CGPoint(x: firstImageDrawX,  y: firstImageDrawY))
        secondImage.draw(at: CGPoint(x: secondImageDrawX, y: secondImageDrawY))

        let image = UIGraphicsGetImageFromCurrentImageContext()

        UIGraphicsEndImageContext()
        
        return image!
    }
    
    convenience init?(withContentsOfUrl url: URL) throws {
        let imageData = try Data(contentsOf: url)
        self.init(data: imageData)
    }
}

To make my framed Image as follows:

func getProfileIcon() -> UIImage {
    let frameSrc = playerProfileObject.levelFrame ?? ""
    let portraitSrc = playerProfileObject.portrait ?? ""
    
    let frameImage = try? UIImage(withContentsOfUrl: URL(string: frameSrc)!)
    let portraitImage = try? UIImage(withContentsOfUrl: URL(string: portraitSrc)!)
    
    let path = roundedPolygonPath(rect: CGRect(x: 0.0, y: 0.0, width: 150.0, height: 150.0), lineWidth: CGFloat(2.0), sides: 6, cornerRadius: 15.0, rotationOffset: CGFloat(Double.pi / 2.0))
    
    let imageLayer = CAShapeLayer()
    imageLayer.frame = CGRect(x: 0.0, y: 0.0, width: path.bounds.width, height: path.bounds.height)
    imageLayer.path = path.cgPath
    imageLayer.fillColor = UIColor(patternImage: portraitImage!).cgColor
    
    return UIImage.imageByCombiningImage(firstImage: createImage(layer: imageLayer), secondImage: frameImage!)
}

The only trouble now is that the image still sticks out slightly from under the frame and is also a little offset. But those issues are not as hard to figure out as clipping the image to the appropriate shape was. In the end I did go with drawing a hexagonal path for the layer mask, but given the number of blend modes and mask options in SwiftUI, I'm certain there is (or will be) a better way involving fewer processing steps. If anyone currently knows a better way, I will still be interested and highly grateful to you for sharing it with me!
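As one possible lighter-weight alternative in pure SwiftUI (untested sketch; the vertex math mirrors the path function above minus the corner rounding), a custom Shape plus .clipShape might replace most of the UIKit drawing:

```swift
import SwiftUI

// A plain (non-rounded) hexagon Shape for use with .clipShape.
struct Hexagon: Shape {
    func path(in rect: CGRect) -> Path {
        let radius = min(rect.width, rect.height) / 2
        let center = CGPoint(x: rect.midX, y: rect.midY)
        var path = Path()
        for i in 0..<6 {
            let angle = CGFloat(i) * .pi / 3 - .pi / 2 // point-up hexagon
            let vertex = CGPoint(x: center.x + radius * cos(angle),
                                 y: center.y + radius * sin(angle))
            if i == 0 { path.move(to: vertex) } else { path.addLine(to: vertex) }
        }
        path.closeSubpath()
        return path
    }
}

// Usage sketch: clip the portrait, then layer the frame on top.
struct FramedPortrait: View {
    var body: some View {
        ZStack {
            Image("portrait")
                .resizable()
                .clipShape(Hexagon())
            Image("frame")
                .resizable()
        }
        .frame(width: 150, height: 150)
    }
}
```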

Sorry for the indentation; SO doesn't let you type tab indents even though it expects you to add code blocks in this tiny box.

Upvotes: 1
