Marco83

Reputation: 1191

CoreGraphics: performance hit in drawing NSImage if image is scaled

I'm writing some Swift code for macOS 10.14. I've hit a performance bottleneck and have isolated it to the rendering code below, stripped of all irrelevant parts.

I have an NSImage (originally a JPG) and I am resizing it with the following code:

import AppKit
import ImageIO

extension NSImage {

    func resized(size: NSSize) -> NSImage {
        let cgImage = self.cgImage!
        let bitsPerComponent = cgImage.bitsPerComponent
        let bytesPerRow = cgImage.bytesPerRow
        let colorSpace = cgImage.colorSpace!
        let bitmapInfo = CGImageAlphaInfo.noneSkipLast
        let context = CGContext(data: nil,
                                width: Int(cgImage.width / 2),
                                height: Int(cgImage.height / 2),
                                bitsPerComponent: bitsPerComponent,
                                bytesPerRow: bytesPerRow,
                                space: colorSpace,
                                bitmapInfo: bitmapInfo.rawValue)!

        context.interpolationQuality = .high
        let newSize = size
        context.draw(cgImage,
                     in: NSRect(x: 0, y: 0, width: newSize.width, height: newSize.height))
        let img = context.makeImage()!
        return NSImage(cgImage: img, size: newSize)
    }

    var cgImage: CGImage? {
        get {
            guard let imageData = self.tiffRepresentation else { return nil }
            guard let sourceData = CGImageSourceCreateWithData(imageData as CFData, nil) else { return nil }
            return CGImageSourceCreateImageAtIndex(sourceData, 0, nil)
        }
    }
}
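
For reference, the context above inherits bitsPerComponent and bytesPerRow straight from the decoded source. A quick diagnostic sketch (not part of the project code; photoURL is a placeholder for any local JPEG, and cgImage is the accessor from the extension above):

let photoURL = URL(fileURLWithPath: "/tmp/photo.jpg")  // placeholder path
let source = NSImage(contentsOf: photoURL)!
let cg = source.cgImage!            // accessor from the extension above
print(cg.width, cg.height)          // pixel dimensions of the decoded JPEG
print(cg.bitsPerComponent)          // typically 8 for a JPEG
print(cg.bytesPerRow)               // row stride computed for the full original width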

I then run a performance test with the following code:

import XCTest
@testable import TestRendering

class TestRenderingTests: XCTestCase {

    static var testImage: NSImage {
        let url = Bundle(for: TestRenderingTests.self).url(forResource: "photo", withExtension: "jpg")
        return NSImage(contentsOf: url!)!
    }

    func testPerformanceExample() {

        let originalImage = type(of: self).testImage
        let multiFactor: CGFloat = 0.99
        let resizedSize = NSSize(
            width: originalImage.size.width * multiFactor,
            height: originalImage.size.height * multiFactor
        )
        let resizedImage = originalImage.resized(size: resizedSize)

        let baseImage = NSImage(size: resizedSize)
        let rect = NSRect(
            origin: NSPoint.zero,
            size: resizedSize
        )

        self.measure {
            baseImage.lockFocus()
            resizedImage.draw(in: rect, from: rect, operation: .copy, fraction: 1)
            baseImage.unlockFocus()
        }
    }

}

If I run the performance test with multiFactor set to 1, I get a certain value for the measured block, which I use as a baseline.

If I then change multiFactor to 0.99, the measured block becomes 59% slower.

[screenshot: measured performance deterioration]

Why this performance hit? My theory is that when the target size differs from the original size, the resize function somehow creates a representation of the image that must be pre-processed every time it is rendered. If the target size equals the original size, it somehow just uses the original image and no pre-processing is needed.
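
One way to probe this theory is to compare the backing pixel dimensions with the point size the image reports. A diagnostic sketch (reusing originalImage and resizedSize from the test above, and the cgImage accessor from the extension):

let resized = originalImage.resized(size: resizedSize)
let backing = resized.cgImage!
print("backing pixels: \(backing.width) x \(backing.height)")
print("reported size:  \(resized.size)")  // fractional when multiFactor is 0.99
// If these disagree, or if the point size is fractional, the pixels
// presumably cannot be blitted 1:1 and must be resampled on every draw.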

I came up with this theory looking at the stack trace while profiling the two versions of the test.

The following stack trace is from when the scale factor is 1 (image size not changed):

[stack trace screenshot]

The following stack trace is from when the scale factor is 0.99:

[stack trace screenshot]

The functions in the call stacks do not match: argb32_image_mark_argb32 vs. argb32_sample_argb32. The mark variant appears to copy pixels through directly, while the sample variant suggests the image is being resampled (interpolated) on every draw.

Is it possible to rewrite the resized(size:) function so that it creates an image that does not need to be "sampled" every time it is rendered?

For reference, I'm using the following image in the test: [test photo]

Upvotes: 1

Views: 603

Answers (1)

Marco83

Reputation: 1191

Thanks to Ken Thomases for the tip, I updated the resize code to:

  • make sure the target dimensions are always integral
  • let CoreGraphics compute the correct bytesPerRow automatically (by passing 0)
  • no longer divide the size by 2 (not sure how that got there in the first place)

The resulting code is:

func resized(size: NSSize) -> NSImage {
    let intSize = NSSize(width: Int(size.width), height: Int(size.height))
    let cgImage = self.cgImage!
    let bitsPerComponent = cgImage.bitsPerComponent
    let colorSpace = cgImage.colorSpace!
    let bitmapInfo = CGImageAlphaInfo.noneSkipLast
    let context = CGContext(data: nil,
                            width: Int(intSize.width),
                            height: Int(intSize.height),
                            bitsPerComponent: bitsPerComponent,
                            bytesPerRow: 0,
                            space: colorSpace,
                            bitmapInfo: bitmapInfo.rawValue)!

    context.interpolationQuality = .high
    context.draw(cgImage,
                 in: NSRect(x: 0, y: 0, width: intSize.width, height: intSize.height))
    let img = context.makeImage()!
    return NSImage(cgImage: img, size: intSize)
}

And this does indeed eliminate the performance hit.
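
A note on the bytesPerRow change: passing 0 asks CGContext to compute a properly aligned row stride for the new width, instead of reusing the stride of the differently sized source image. A standalone sketch of that behavior (hypothetical 100 x 100 dimensions, unrelated to the test photo):

import CoreGraphics

let ctx = CGContext(data: nil,
                    width: 100,
                    height: 100,
                    bitsPerComponent: 8,
                    bytesPerRow: 0,  // 0 = let CoreGraphics choose
                    space: CGColorSpaceCreateDeviceRGB(),
                    bitmapInfo: CGImageAlphaInfo.noneSkipLast.rawValue)!
print(ctx.bytesPerRow)  // at least 400 (100 pixels x 4 bytes), possibly padded for alignment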

Upvotes: 1
