Reputation: 10077
I'm currently writing an iOS backend for a cross-platform program whose platform-independent engine writes all of its graphics into 32-bit pixel buffers, in RGBA order. The alpha byte isn't used. The graphics are always opaque so I don't need alpha blending.
What is the most efficient way to draw and scale these pixel buffers into my CGContextRef inside my drawRect method? The pixel buffers are usually only 320x240 pixels and need to be scaled to completely fill my view's dimensions, e.g. 1024x768 on non-Retina iPads and 2048x1536 on Retina iPads. That's a whole lot of work, so it's best done by the GPU. But how can I get iOS to draw and scale using the GPU without resorting to OpenGL?
I've tried using CGContextDrawImage(), but this is really slow, probably because everything is done using the CPU.
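Roughly, my drawRect currently looks something like the sketch below. How the CGImage gets built from the buffer is just an illustration; _pixelBuffer stands in for my engine's 320x240 RGBA buffer with the alpha byte unused.

- (void)drawRect:(CGRect)rect
{
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef bitmapCtx = CGBitmapContextCreate(_pixelBuffer, 320, 240,
                                                   8, 320 * 4, colorSpace,
                                                   kCGImageAlphaNoneSkipLast);
    CGImageRef image = CGBitmapContextCreateImage(bitmapCtx);

    // This call scales 320x240 up to the full view bounds on the CPU every frame.
    CGContextDrawImage(UIGraphicsGetCurrentContext(), self.bounds, image);

    CGImageRelease(image);
    CGContextRelease(bitmapCtx);
    CGColorSpaceRelease(colorSpace);
}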
I've also had a look at the CIImage APIs because these are apparently GPU-optimized, but the problem is that CIImage objects are immutable, so I'd have to create a new CIImage for every frame I need to draw, which would probably kill performance as well.
I could go with OpenGL of course but I'd like to get some feedback on whether there is an easier solution to get what I want here.
Thanks for any ideas!
Upvotes: 2
Views: 359
Reputation: 10077
To answer my own question: the easiest way to do this is to create a CGImageRef from the pixel buffer using CGDataProviderCreateWithData() and then CGImageCreate(). The resulting CGImageRef can then be set as the contents of the view's layer, like so:
view.layer.contents = (id) myImage;
The image will then be automatically scaled (with GPU acceleration) to fill the UIView's frame if contentMode is set to UIViewContentModeScaleToFill (which is the default).
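For completeness, here is a minimal sketch of the whole thing for a 320x240 RGBA buffer. Variable names like pixelBuffer are placeholders for the engine's actual buffer; under ARC the cast needs __bridge, under MRC a plain (id) cast works as shown above.

// The provider neither copies nor frees the buffer (NULL release callback),
// so the engine must keep the buffer alive while the image is on screen.
CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, pixelBuffer,
                                                          320 * 240 * 4, NULL);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGImageRef myImage = CGImageCreate(320, 240,        // width, height
                                   8, 32, 320 * 4,  // bits/component, bits/pixel, bytes/row
                                   colorSpace,
                                   kCGImageAlphaNoneSkipLast, // RGBX: alpha byte ignored
                                   provider, NULL, false,
                                   kCGRenderingIntentDefault);

view.layer.contents = (__bridge id) myImage;
view.contentMode = UIViewContentModeScaleToFill; // the default; scaling is GPU-accelerated

// The layer retains its contents, so these can be released right away.
CGImageRelease(myImage);
CGColorSpaceRelease(colorSpace);
CGDataProviderRelease(provider);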
Thanks to David Duncan from the Cocoa mailing list for suggesting this solution!
Upvotes: 1