Reputation: 2821
I'm currently working on an application which displays many images one after another. Unfortunately I don't have the luxury of using video for this; however, I can choose the image codec in use. The data is sent from a server to the application, already encoded.
If I use PNG or JPEG, for example, I can convert the data I receive into a UIImage using [[UIImage alloc] initWithData:some_data]. When I use a raw byte array, or another custom codec which has to decode to a raw byte array first, I have to create a bitmap context, then use CGBitmapContextCreateImage(bitmapContext), which gives a CGImageRef, which is then fed into [[UIImage alloc] initWithCGImage:cg_image]. This is much slower.
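Roughly, the two paths look like this (a simplified sketch, not the exact code; 8-bit RGBA pixels with premultiplied alpha are assumed for the raw case):

```objc
#import <UIKit/UIKit.h>

// Path 1: encoded data (PNG, JPEG, BMP, GIF) straight into UIImage.
static UIImage *ImageFromEncodedData(NSData *data)
{
    return [[UIImage alloc] initWithData:data];
}

// Path 2: raw RGBA bytes via a bitmap context.
static UIImage *ImageFromRawRGBA(NSData *rgba, size_t width, size_t height)
{
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate((void *)rgba.bytes,
                                                 width, height,
                                                 8,          // bits per component
                                                 width * 4,  // bytes per row
                                                 colorSpace,
                                                 (CGBitmapInfo)kCGImageAlphaPremultipliedLast);
    CGImageRef cgImage = CGBitmapContextCreateImage(context);
    UIImage *image = [[UIImage alloc] initWithCGImage:cgImage];
    CGImageRelease(cgImage);
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
    return image;
}
```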
The above chart (time is measured in seconds) is the time it takes to perform the conversion from NSData to UIImage. PNG, JPEG, BMP, and GIF are all approximately the same. Null is simply not bothering with the conversion and returning nil instead. Raw is a raw RGBA byte array which is converted using the bitmap context method. The custom one decompresses into a Raw format and then does the same thing. LZ4 is the raw data, compressed using the LZ4 algorithm and so it also runs through the bitmap context method.
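Conceptually, each conversion measurement is just something like this (a simplified sketch, not my actual harness; the converter block stands in for whichever decode path is under test, e.g. the helpers above):

```objc
#import <UIKit/UIKit.h>
#import <QuartzCore/QuartzCore.h>   // CACurrentMediaTime()

// Time a single NSData -> UIImage conversion for a given decode path.
static CFTimeInterval TimeConversion(NSData *data, UIImage *(^converter)(NSData *))
{
    CFTimeInterval start = CACurrentMediaTime();
    UIImage *image = converter(data);
    (void)image;   // only the conversion itself is being timed here
    return CACurrentMediaTime() - start;
}
```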
PNG images, for example, are simply bitmap images which have been compressed. That decompression followed by a render takes less time than my render of Raw images. iOS must be doing something behind the scenes to make this faster.
If we look at the chart of how long it takes to convert each type as well as how long it takes to draw (to a graphics context) we get the following:
We can see that most images take very different times to convert but are fairly similar in drawing times. This rules out any performance boost from UIImage being lazy and converting only when needed.
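The drawing measurement is conceptually something like this (a simplified sketch; drawing the converted image at its own size into an offscreen context is an assumption here, not a quote of my code):

```objc
#import <UIKit/UIKit.h>
#import <QuartzCore/QuartzCore.h>   // CACurrentMediaTime()

// Draw the image into an offscreen graphics context and return how long it took.
static CFTimeInterval TimeDrawingImage(UIImage *image)
{
    CFTimeInterval start = CACurrentMediaTime();
    UIGraphicsBeginImageContextWithOptions(image.size, YES, image.scale);
    [image drawInRect:CGRectMake(0, 0, image.size.width, image.size.height)];
    UIGraphicsEndImageContext();
    return CACurrentMediaTime() - start;
}
```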
My question is essentially: are the faster speeds for well-known codecs something I can exploit? Or, if not, is there another way I can render my Raw data faster?
Edit: For the record, I am drawing these images on top of another UIImage whenever I get a new one. It may be that there is a faster alternative, which I am willing to look into. However, OpenGL is not an option unfortunately.
Further edit: This question is fairly important and I would like the best possible answer. The bounty will not be awarded until the time expires to ensure the best possible answers are given.
Final edit: My question was essentially why decompressing and drawing a raw RGBA array isn't faster than drawing a PNG, for example, since the PNG has to decompress to an RGBA array and then draw anyway. The result is that it is in fact faster. However, this only appears to be the case in release builds. Debug builds are not optimised for this, but the UIImage code which runs behind the scenes clearly is. When compiled as a release build, RGBA array images were much faster than the other codecs.
Upvotes: 3
Views: 1017
Reputation: 33359
[UIImage initWithData:] does not copy any memory around. It just leaves the memory where it is; then, when you draw, it hands the memory to the GPU to do its work, without the CPU or RAM being involved much in decoding the image. It's all done in the GPU's dedicated hardware.
Remember, Apple designs their own CPU/GPU by licensing other manufacturers' technology and customising it to suit their needs. They've got more than a thousand CPU hardware engineers working on just a single chipset, and efficiently processing images is a priority.
Your lower level code is probably doing lots of memory copying and math, and that's why it's so much slower.
UIImage and NSData are very intelligent, high-performance APIs that have been developed over decades by people who truly understand (or even built) the hardware and kernel. They're much more efficient than anything you can achieve with lower level APIs unless you're prepared to write many thousands of lines of code and spend months or even years testing and tweaking to get better performance.
NSData, for example, can effortlessly work with terabytes of data with good performance even though only a few gigabytes of RAM might be available; used correctly it will seamlessly combine RAM and SSD/HDD storage, often with performance similar to what you'd get if you actually had terabytes of RAM. UIImage can detect low memory situations and free almost all of its RAM without any code on your part, provided it knows the URL the image was originally loaded from (this works better for file:// URLs than http:// URLs).
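As a concrete example of that behaviour (just a sketch; the file path is a placeholder), a file-backed NSData can be memory-mapped so pages are faulted in from disk only as they're touched, rather than loaded into RAM up front:

```objc
// Needs only Foundation. NSDataReadingMappedIfSafe asks for a memory-mapped
// file where possible instead of reading the whole thing into RAM.
NSError *error = nil;
NSData *mapped = [NSData dataWithContentsOfFile:@"/path/to/very_large_file.bin"
                                        options:NSDataReadingMappedIfSafe
                                          error:&error];
if (mapped == nil) {
    NSLog(@"mapping failed: %@", error);
}
```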
If you can do what you want with UIImage and NSData, then you should. Only go with the lower level APIs if you have a feature you can't otherwise implement.
Upvotes: 0
Reputation: 81868
When measuring performance it's important to measure the full pipeline in order to find the bottleneck.
In your case that means you cannot isolate UIImage creation. You will have to include image display; otherwise you fall into the trap of measuring only part of what you're interested in.
UIImage is not a thin wrapper around bitmap data but a rather complex and optimized system. The underlying CGImage can, for example, be only a reference to some compressed data on disk. That's why initializing a UIImage using initWithContentsOfFile: or initWithData: is fast. There are more hidden performance optimizations in the ImageIO and Quartz frameworks in iOS that will all affect your measurements.
The only reliable way to get solid measurements is to do what you really want to do (get the data from the network or disk, create a UIImage somehow, and display it on screen for at least one frame).
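For example (only a sketch; the view controller, its imageView, and the ivar are assumptions, not something from your question), you could time from receiving the data until the next display refresh after setting the image, which approximates when the frame actually appears:

```objc
#import <UIKit/UIKit.h>
#import <QuartzCore/QuartzCore.h>

// Hypothetical view controller used only to illustrate full-pipeline timing.
@interface FrameTimingViewController : UIViewController
@property (nonatomic, strong) UIImageView *imageView;
@end

@implementation FrameTimingViewController {
    CFTimeInterval _pipelineStart;
}

// Measure from "data received" to the next display refresh after the image
// is set, i.e. including decode, layout and commit to the render server.
- (void)displayAndMeasure:(NSData *)data
{
    _pipelineStart = CACurrentMediaTime();
    self.imageView.image = [[UIImage alloc] initWithData:data];

    CADisplayLink *link = [CADisplayLink displayLinkWithTarget:self
                                                      selector:@selector(frameDidRefresh:)];
    [link addToRunLoop:[NSRunLoop mainRunLoop] forMode:NSRunLoopCommonModes];
}

- (void)frameDidRefresh:(CADisplayLink *)link
{
    NSLog(@"data -> screen: %.4f s", CACurrentMediaTime() - _pipelineStart);
    [link invalidate];   // one-shot measurement
}

@end
```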
Here are some considerations you should be aware of:
- Apple's graphics frameworks go to great lengths to perform the minimal work necessary. If an image is not displayed it might never be decompressed (a sketch of forcing the decode up front follows this list).
- If an image is displayed at a lower resolution than its original pixels, it might be only partly decompressed (especially possible with JPEGs). This can be a good thing to help with optimization, but of course it can't be used when creating the images from a CGBitmapContext at full image resolution. So don't do this unless necessary.
- When measuring with Instruments you might not see all relevant CPU cycles. Decompression of images can happen in backboardd (the kind-of-window-server used in iOS).
- Using uncompressed images might seem like the fastest possible idea, but this ignores the fact that memory might be the bottleneck and that less data (compressed images) can help with that.
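Regarding the first point: one way to force the decode up front (a sketch, not a recommendation) is ImageIO's kCGImageSourceShouldCacheImmediately option (iOS 7 and later), which asks for the image to be decoded and cached at creation time instead of at display time. Whether that beats your bitmap-context route is exactly what the full-pipeline measurement should tell you:

```objc
// Fragment; needs ImageIO.framework (#import <ImageIO/ImageIO.h>) and UIKit.
// `encodedData` stands in for the PNG/JPEG NSData received from the server.
NSDictionary *options = @{ (__bridge id)kCGImageSourceShouldCacheImmediately : @YES };
CGImageSourceRef source = CGImageSourceCreateWithData((__bridge CFDataRef)encodedData, NULL);
CGImageRef decoded = source
    ? CGImageSourceCreateImageAtIndex(source, 0, (__bridge CFDictionaryRef)options)
    : NULL;
UIImage *image = decoded ? [[UIImage alloc] initWithCGImage:decoded] : nil;
// `image` would then be handed to whatever displays it.
if (decoded) CGImageRelease(decoded);
if (source)  CFRelease(source);
```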
Conclusion:
Your aim should be to find the bottleneck for your real scenario. So don't test using made-up test data and contrived code. You might end up optimizing performance for a code path not taken in your app.
When you change your testing code to measure the full pipeline, it would be nice if you could update your question with the results.
Upvotes: 1
Reputation: 10938
UIImage uses an abstract internal representation best suited to the actual source, hence the good performance. PNG images are not converted to a bitmap and then displayed by UIImage; they go through a more performant drawing path.
On the other hand, bitmaps are the biggest, least efficient, and heaviest way to handle images, so there's not much you can do about it besides converting them to another format.
Upvotes: 0