heral

Reputation: 63

Core Image GPU performance too slow

I was experimenting with Core Image filters and ran into a puzzling benchmark. With the following two functions, one doing the heavy work on the CPU and the other on the GPU as their names suggest, CPU performance is about a hundred times faster than GPU performance. I tried the "CILineOverlay" and "CIPhotoEffectProcess" filters and measured the transform time with DispatchTime.now(). Am I doing something wrong? Or is it related to deprecated OpenGL support?

private func apply_cpu(to image: UIImage?, appleFilterName: String) -> UIImage? {
    guard let image = image, let cgimg = image.cgImage else {
        return nil
    }

    let coreImage = CIImage(cgImage: cgimg)

    // Note: this version hardcodes CISepiaTone and ignores appleFilterName
    let filter = CIFilter(name: "CISepiaTone")
    filter?.setValue(coreImage, forKey: kCIInputImageKey)
    filter?.setValue(0.5, forKey: kCIInputIntensityKey)

    if let output = filter?.value(forKey: kCIOutputImageKey) as? CIImage {
        // UIImage(ciImage:) wraps the CIImage without rendering it yet
        return UIImage(ciImage: output)
    } else {
        return nil
    }
}


private func apply_gpu(to image: UIImage?, appleFilterName: String) -> UIImage? {
    guard let image = image, let cgimg = image.cgImage else {
        return nil
    }

    let coreImage = CIImage(cgImage: cgimg)

    let start = DispatchTime.now()

    // GPU-backed context via OpenGL ES
    guard let openGLContext = EAGLContext(api: .openGLES3) else {
        return nil
    }
    let context = CIContext(eaglContext: openGLContext)

    guard let filter = CIFilter(name: appleFilterName) else {
        return nil
    }

    if filter.inputKeys.contains(kCIInputImageKey) {
        filter.setValue(coreImage, forKey: kCIInputImageKey)
    }

    if filter.inputKeys.contains(kCIInputIntensityKey) {
        // mirror the intensity used in the CPU version
        filter.setValue(0.5, forKey: kCIInputIntensityKey)
    }

    if let output = filter.value(forKey: kCIOutputImageKey) as? CIImage,
       let cgimgresult = context.createCGImage(output, from: output.extent) {
        let end = DispatchTime.now()
        let ms = Double(end.uptimeNanoseconds - start.uptimeNanoseconds) / 1_000_000
        print("GPU path took \(ms) ms")
        return UIImage(cgImage: cgimgresult)
    }

    return nil
}

Upvotes: 1

Views: 1741

Answers (1)

user7014451

Reputation:

From the comments, the issue was where the performance timing tests were being run. I can't stress this enough when testing CoreImage filters:

Use a real device, not the simulator.

My experience is that rendering can take seconds to minutes in the simulator, where any iPhone 5 or later device running iOS 9+ (maybe earlier too, on both counts) will be near real-time to milliseconds. If you aren't seeing this on a real device, there is something wrong in the code.
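
For what it's worth, here is a minimal sketch of how I would time just the render on a real device. The helper name timeFilter and the hardcoded intensity are my own placeholders, not from the question's code; the key point is creating the CIContext once, outside the timed region, since context creation is expensive:

import UIKit
import CoreImage

// Create the context once and reuse it; building a CIContext per call
// is expensive and will dominate any timing you take.
let sharedContext = CIContext()

func timeFilter(named filterName: String, on input: CIImage) -> UIImage? {
    guard let filter = CIFilter(name: filterName) else { return nil }
    filter.setValue(input, forKey: kCIInputImageKey)

    let start = DispatchTime.now()
    guard let output = filter.outputImage,
          let cgResult = sharedContext.createCGImage(output, from: output.extent) else {
        return nil
    }
    let elapsedMs = Double(DispatchTime.now().uptimeNanoseconds - start.uptimeNanoseconds) / 1_000_000
    print("\(filterName): \(elapsedMs) ms") // render time only
    return UIImage(cgImage: cgResult)
}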

I've not found any tutorials, any books, anything at all that stresses this single point. My best resource, Simon Gladman, who wrote the excellent Core Image for Swift (be careful, it's Swift 2), explains a lot of what I believe is going on, but never really stresses why that is the case.

An iOS device uses the GPU. A simulator does not.
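
If you want the CPU/GPU distinction to be explicit rather than relying on defaults, Core Image can be asked for a software renderer. A quick sketch, assuming the Swift CIContextOption spelling available in newer SDKs:

import CoreImage

// Force the software (CPU) renderer; without this option a real device
// will take the GPU path by default.
let cpuContext = CIContext(options: [.useSoftwareRenderer: true])
let gpuContext = CIContext() // default: GPU on a real device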

I'm sure it's more complex than that and involves optimization. But the thing is this: while you can use CoreImage in macOS, if you are using the simulator you are targeting iOS. So where a macOS project using CoreImage may perform well, an iOS project needs a real device to give you a real feel for performance.
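
And since the question mentions deprecated OpenGL support: EAGLContext is indeed deprecated (as of iOS 12), and a Metal-backed context is the current GPU path. A minimal sketch, falling back to the default context if no Metal device is available:

import CoreImage
import Metal

// Prefer a Metal-backed CIContext; EAGLContext/OpenGL ES is deprecated.
let ciContext: CIContext
if let device = MTLCreateSystemDefaultDevice() {
    ciContext = CIContext(mtlDevice: device)
} else {
    ciContext = CIContext()
}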

Upvotes: 2
