Avery Vine

Reputation: 174

Check if subsection of UIImage is light or dark

I'm attempting to overlay a chevron button that will allow the user to dismiss the current view. The colour of the chevron should be light on dark images, and dark on light images. I've attached a screenshot of what I'm describing.

(screenshot: chevron button overlaid on light and dark images)

However, there is a significant performance impact when calculating the lightness/darkness of the whole image, which I'm doing like so (operating on the `CGImage`):

var isDark: Bool {
    guard let imageData = dataProvider?.data else { return false }
    guard let ptr = CFDataGetBytePtr(imageData) else { return false }
    let length = CFDataGetLength(imageData)
    // Consider the image dark once more than 45% of its pixels fall below the cutoff
    let threshold = Int(Double(width * height) * 0.45)
    var darkPixels = 0
    // Assumes 4 bytes per pixel (e.g. RGBA8)
    for i in stride(from: 0, to: length, by: 4) {
        let r = ptr[i]
        let g = ptr[i + 1]
        let b = ptr[i + 2]
        // Rec. 601 luma approximation
        let luminance = (0.299 * Double(r) + 0.587 * Double(g) + 0.114 * Double(b))
        if luminance < 150 {
            darkPixels += 1
            if darkPixels > threshold {
                return true
            }
        }
    }
    return false
}

In addition, a whole-image check doesn't do well when the area under the chevron is dark but the rest of the image is light, for example.

I'd like to calculate it for just a small subsection of the image, since the chevron is very small. I tried cropping the image using CGImage's `cropping(to:)`, but the challenge is that the image view's content mode is set to aspect fill, meaning the top of the UIImageView's frame isn't the top of the UIImage (e.g. the image might be zoomed in and centred). Is there a way that I can isolate just the part of the image that appears below the chevron's frame, after the image has been adjusted by the aspect fill?

Edit

I was able to achieve this thanks to the first link in the accepted answer. I created a series of extensions that I think should work for situations other than mine.

extension UIImage {
    var isDark: Bool {
        return cgImage?.isDark ?? false
    }
}

extension CGImage {
    var isDark: Bool {
        guard let imageData = dataProvider?.data else { return false }
        guard let ptr = CFDataGetBytePtr(imageData) else { return false }
        let length = CFDataGetLength(imageData)
        // Consider the image dark once more than 45% of its pixels fall below the cutoff
        let threshold = Int(Double(width * height) * 0.45)
        var darkPixels = 0
        // Assumes 4 bytes per pixel (e.g. RGBA8)
        for i in stride(from: 0, to: length, by: 4) {
            let r = ptr[i]
            let g = ptr[i + 1]
            let b = ptr[i + 2]
            // Rec. 601 luma approximation
            let luminance = (0.299 * Double(r) + 0.587 * Double(g) + 0.114 * Double(b))
            if luminance < 150 {
                darkPixels += 1
                if darkPixels > threshold {
                    return true
                }
            }
        }
        return false
    }

    // Crops using a rect expressed in points, scaled into the image's pixel coordinates
    func cropping(to rect: CGRect, scale: CGFloat) -> CGImage? {
        let scaledRect = CGRect(x: rect.minX * scale, y: rect.minY * scale, width: rect.width * scale, height: rect.height * scale)
        return self.cropping(to: scaledRect)
    }
}

extension UIImageView {
    func hasDarkImage(at subsection: CGRect) -> Bool {
        guard let image = image, let aspectSize = aspectFillSize() else { return false }
        // Points-to-pixels scale (assumes width is the filled dimension)
        let scale = image.size.width / frame.size.width
        // Trim off the parts of the image that aspect fill pushes outside the frame
        let cropRect = CGRect(x: (aspectSize.width - frame.width) / 2,
                              y: (aspectSize.height - frame.height) / 2,
                              width: aspectSize.width,
                              height: frame.height)
        // First crop to the visible region, then to the requested subsection
        let croppedImage = image.cgImage?
            .cropping(to: cropRect, scale: scale)?
            .cropping(to: subsection, scale: scale)
        return croppedImage?.isDark ?? false
    }

    // The size at which the image is drawn under .scaleAspectFill
    private func aspectFillSize() -> CGSize? {
        guard let image = image else { return nil }
        var aspectFillSize = CGSize(width: frame.width, height: frame.height)
        let widthScale = frame.width / image.size.width
        let heightScale = frame.height / image.size.height
        if heightScale > widthScale {
            // Height is the limiting dimension; the width overflows the frame
            aspectFillSize.width = heightScale * image.size.width
        }
        else if widthScale > heightScale {
            // Width is the limiting dimension; the height overflows the frame
            aspectFillSize.height = widthScale * image.size.height
        }
        return aspectFillSize
    }
}
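For example, a call site might look like the following. This is an illustrative sketch only: `imageView` and `chevronButton` are assumed outlets, not names from the original code.

```swift
// Hypothetical usage: tint the chevron based on the image region beneath it.
let chevronFrame = chevronButton.convert(chevronButton.bounds, to: imageView)
chevronButton.tintColor = imageView.hasDarkImage(at: chevronFrame) ? .white : .black
```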

Upvotes: 5

Views: 1646

Answers (1)

chedabob

Reputation: 5881

There are a couple of options here for finding the size of your image once it's been fitted to the view: How to know the image size after applying aspect fit for the image in an UIImageView

Once you've got that, you can figure out where the chevron lies (you may need to convert its frame into the image view's coordinate space first: https://developer.apple.com/documentation/uikit/uiview/1622498-convert)
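That conversion is a one-liner with UIView's `convert(_:to:)`; the view names here are placeholders:

```swift
// Map the chevron's bounds from its own coordinate space into the
// image view's, so the two rects can be compared directly.
let chevronRectInImageView = chevron.convert(chevron.bounds, to: imageView)
```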

If performance is still lacking, I'd look into using Core Image to perform the calculations: https://www.hackingwithswift.com/example-code/media/how-to-read-the-average-color-of-a-uiimage-using-ciareaaverage

There are a few ways of doing it with Core Image, but getting the average is the simplest.
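A minimal sketch of that approach, following the linked article: CIAreaAverage collapses a region to a single average pixel, which you then read back and classify with the same luminance cutoff the question uses. The function name is illustrative, and `region` is assumed to already be in the image's pixel coordinates.

```swift
import CoreImage

// Average the pixels inside `region` and report whether the result is dark.
func isDarkRegion(of cgImage: CGImage, in region: CGRect) -> Bool {
    let input = CIImage(cgImage: cgImage)
    guard let filter = CIFilter(name: "CIAreaAverage", parameters: [
        kCIInputImageKey: input,
        kCIInputExtentKey: CIVector(cgRect: region)
    ]), let output = filter.outputImage else { return false }

    // Render the 1x1 average pixel into a 4-byte RGBA buffer.
    var pixel = [UInt8](repeating: 0, count: 4)
    let context = CIContext(options: [.workingColorSpace: kCFNull!])
    context.render(output,
                   toBitmap: &pixel,
                   rowBytes: 4,
                   bounds: CGRect(x: 0, y: 0, width: 1, height: 1),
                   format: .RGBA8,
                   colorSpace: nil)

    // Same Rec. 601 luma cutoff as the question's per-pixel loop.
    let luminance = 0.299 * Double(pixel[0]) + 0.587 * Double(pixel[1]) + 0.114 * Double(pixel[2])
    return luminance < 150
}
```

Because the averaging runs on the GPU, this should scale far better than walking every byte of the bitmap on the CPU.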

Upvotes: 2
