swalkner

Reputation: 17379

Swift 3: get color of pixel in UIImage (better: UIImageView)

I tried different solutions (e.g. this one), but the color I get back looks a bit different from the one in the actual image. I guess it's because the image is only RGB, not RGBA. Could that be the issue?
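To check that assumption, this is roughly how the decoded image could be inspected (just a diagnostic sketch; "gradient" is the asset name I use here):

// Diagnostic sketch: inspect how UIKit actually decoded the image
if let cgImage = UIImage(named: "gradient")?.cgImage {
    print(cgImage.bitsPerPixel)   // 32 -> four 8-bit components per pixel
    print(cgImage.alphaInfo)      // .none / .noneSkipLast -> no usable alpha channel
    print(cgImage.bytesPerRow)    // rows may be padded beyond width * 4
}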

Related issue: if the UIImageView has contentMode = .scaleAspectFill, do I have to recalculate the image coordinates, or can I just use imageView.image?

EDIT:

I tried with this extension:

extension CALayer {
    func getPixelColor(point: CGPoint) -> CGColor {
        // 1x1 RGBA buffer that the layer gets rendered into
        var pixel: [CUnsignedChar] = [0, 0, 0, 0]

        let colorSpace = CGColorSpaceCreateDeviceRGB()
        let bitmapInfo = CGBitmapInfo(rawValue: CGImageAlphaInfo.premultipliedLast.rawValue)

        let context = CGContext(data: &pixel, width: 1, height: 1, bitsPerComponent: 8, bytesPerRow: 4, space: colorSpace, bitmapInfo: bitmapInfo.rawValue)

        // Shift the layer so that `point` lands on the context's single pixel, then render
        context!.translateBy(x: -point.x, y: -point.y)
        self.render(in: context!)

        let red: CGFloat   = CGFloat(pixel[0]) / 255.0
        let green: CGFloat = CGFloat(pixel[1]) / 255.0
        let blue: CGFloat  = CGFloat(pixel[2]) / 255.0
        let alpha: CGFloat = CGFloat(pixel[3]) / 255.0

        let color = UIColor(red: red, green: green, blue: blue, alpha: alpha)

        return color.cgColor
    }
}

but for some images it seems as if the coordinate system is flipped, and for others I get completely wrong values... what am I missing here?
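One thing I considered (just a sketch, I haven't verified that it fixes the problem): let UIKit set up the rendering context, since a hand-made CGContext does not get the flipped, scale-aware coordinate system that UIGraphicsBeginImageContextWithOptions provides, and then sample the snapshot image instead of rendering into a 1x1 context:

extension CALayer {
    // Sketch: snapshot the whole layer via UIKit, then read pixels from the
    // snapshot's CGImage (e.g. with a byte-pointer approach like in the answer below)
    func snapshotImage() -> UIImage? {
        UIGraphicsBeginImageContextWithOptions(bounds.size, isOpaque, 0)
        defer { UIGraphicsEndImageContext() }
        guard let context = UIGraphicsGetCurrentContext() else { return nil }
        render(in: context)
        return UIGraphicsGetImageFromCurrentImageContext()
    }
}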

EDIT 2:

I tried it with these images:

https://dl.dropboxusercontent.com/u/119600/gradient.png https://dl.dropboxusercontent.com/u/119600/[email protected]

but I get wrong values. They are displayed in a UIImageView, but I convert the coordinates:

private func convertScreenPointToImage(point: CGPoint) -> CGPoint {
    let widthMultiplier = gradientImage.size.width / UIScreen.main.bounds.width
    let heightMultiplier = gradientImage.size.height / UIScreen.main.bounds.height

    return CGPoint(x: point.x * widthMultiplier, y: point.y * heightMultiplier)
}

The gradient image above gives me === Optional((51, 76, 184, 255)) when running on the iPhone 7 simulator, which is not correct...
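Since the image view uses .scaleAspectFill, I suspect the conversion also has to account for the cropping and the asset's scale factor; something like this rough sketch (imageView stands in for my actual outlet, and I haven't verified it):

// Sketch: map a point in the image view's bounds to pixel coordinates of the
// underlying image when contentMode == .scaleAspectFill.
// Assumes `point` is in imageView.bounds coordinates.
func imagePixelPoint(for point: CGPoint, in imageView: UIImageView) -> CGPoint? {
    guard let image = imageView.image else { return nil }
    let viewSize = imageView.bounds.size
    let imageSize = image.size  // in points
    // Aspect-fill scales by the larger ratio, then crops the overflow, centered
    let scale = max(viewSize.width / imageSize.width,
                    viewSize.height / imageSize.height)
    let scaledSize = CGSize(width: imageSize.width * scale,
                            height: imageSize.height * scale)
    let cropOffset = CGPoint(x: (scaledSize.width - viewSize.width) / 2,
                             y: (scaledSize.height - viewSize.height) / 2)
    // Back to image points, then to pixels (for @2x / @3x assets)
    let imagePoint = CGPoint(x: (point.x + cropOffset.x) / scale,
                             y: (point.y + cropOffset.y) / scale)
    return CGPoint(x: imagePoint.x * image.scale,
                   y: imagePoint.y * image.scale)
}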

Upvotes: 3

Views: 2550

Answers (1)

Josh Homann

Reputation: 16347

I wrote this in a playground. I index into the image data with a pointer and grab the RGBA values:

func pixel(in image: UIImage, at point: CGPoint) -> (UInt8, UInt8, UInt8, UInt8)? {
    guard let cgImage = image.cgImage,
        let cfData = cgImage.dataProvider?.data,
        let pointer = CFDataGetBytePtr(cfData) else {
            return nil
    }
    // Work in pixel coordinates of the underlying CGImage
    let width = cgImage.width
    let height = cgImage.height
    let x = Int(point.x)
    let y = Int(point.y)
    guard x >= 0, x < width, y >= 0, y < height else {
        return nil
    }
    // Assumes 4 bytes per pixel (32-bit RGBA); use bytesPerRow in case rows are padded
    let bytesPerPixel = 4
    let offset = y * cgImage.bytesPerRow + x * bytesPerPixel
    return (pointer[offset], pointer[offset + 1], pointer[offset + 2], pointer[offset + 3])
}

let image = UIImage(named: "t.png")!
if let (r,g,b,a) = pixel(in: image, at: CGPoint(x: 1, y:2)) {
    print ("Red: \(r), Green: \(g), Blue: \(b), Alpha: \(a)")
}

Note that if you use this on a UIImage that is a property of a UIImageView, the pixel coordinates are those of the actual image at its original resolution, not the screen coordinates of the scaled UIImageView. Also, I tried it with an RGB JPG and an RGBA PNG, and both get imported as 32-bit RGBA images, so it works for either.
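For a @2x asset, UIImage.size is in points while the data pointer is indexed in pixels, so the sample point should be scaled first, roughly like this (the asset name is just a placeholder):

// Sketch: convert a point-based coordinate to pixel coordinates for a
// @2x / @3x asset before sampling ("gradient" is a placeholder asset name)
let retinaImage = UIImage(named: "gradient")!
let pointInPoints = CGPoint(x: 10, y: 20)
let pointInPixels = CGPoint(x: pointInPoints.x * retinaImage.scale,
                            y: pointInPoints.y * retinaImage.scale)
if let (r, g, b, a) = pixel(in: retinaImage, at: pointInPixels) {
    print("Red: \(r), Green: \(g), Blue: \(b), Alpha: \(a)")
}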

Upvotes: 3
