Reputation: 229361
I'm getting images from the camera and I have to account for them being in different orientations. The first approach I took was to use this code snippet to create a properly-rotated UIImage. That calculates a CGAffineTransform and then draws the image using CGContextDrawImage. However, this operation took 2-3 seconds to complete.
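For context, the slow path boils down to something like this (a trimmed sketch rather than the exact snippet; the real code handles all eight UIImageOrientation values, while this shows only the UIImageOrientationRight case and skips the vertical-flip bookkeeping that CGContextDrawImage needs in a UIKit context):

```objc
#import <UIKit/UIKit.h>

// Trimmed sketch of the slow path: bake the rotation into a new bitmap
// on the CPU. Only the UIImageOrientationRight case is shown.
static UIImage *RotatedImage(UIImage *image)
{
    CGImageRef cgImage = image.CGImage;
    CGFloat width  = CGImageGetWidth(cgImage);
    CGFloat height = CGImageGetHeight(cgImage);

    // Rotate 90 degrees and translate back into view; the output
    // bounds are swapped (height x width).
    CGAffineTransform transform = CGAffineTransformMakeTranslation(height, 0);
    transform = CGAffineTransformRotate(transform, M_PI_2);

    UIGraphicsBeginImageContext(CGSizeMake(height, width));
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextConcatCTM(context, transform);

    // The expensive part: every pixel is resampled through the
    // transform into the new bitmap.
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), cgImage);

    UIImage *rotated = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return rotated;
}
```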
The second approach I took was to just calculate by how many degrees I'd have to rotate the image, and whether I'd have to flip it, and then to apply those rotations before drawing the UIImage. This approach took a negligible amount of time.
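The orientation-to-rotation mapping is just a lookup, roughly like this (a sketch; the helper name is mine, and the exact angles and flip order depend on your display convention):

```objc
// Sketch: map UIImageOrientation to a rotation (degrees, clockwise)
// plus a horizontal-flip flag. No pixel data is touched here.
static void RotationForOrientation(UIImageOrientation orientation,
                                   CGFloat *degrees, BOOL *flipped)
{
    switch (orientation) {
        case UIImageOrientationUp:            *degrees = 0;   *flipped = NO;  break;
        case UIImageOrientationDown:          *degrees = 180; *flipped = NO;  break;
        case UIImageOrientationLeft:          *degrees = 270; *flipped = NO;  break;
        case UIImageOrientationRight:         *degrees = 90;  *flipped = NO;  break;
        case UIImageOrientationUpMirrored:    *degrees = 0;   *flipped = YES; break;
        case UIImageOrientationDownMirrored:  *degrees = 180; *flipped = YES; break;
        case UIImageOrientationLeftMirrored:  *degrees = 270; *flipped = YES; break;
        case UIImageOrientationRightMirrored: *degrees = 90;  *flipped = YES; break;
    }
}
```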
To be specific, I'm using Cocos2D, so in either case I converted the UIImage to a CCTexture2D, created a sprite with that texture, and then drew it. In the first case, I created a new UIImage and had an unrotated CCSprite. In the second, I used the same UIImage but had a rotated sprite.
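In cocos2d terms, the second approach looks roughly like this (a sketch assuming the cocos2d 1.x API; cameraImage is a placeholder variable, and initWithImage: may differ on other cocos2d versions):

```objc
// Sketch of the fast path ("self" is assumed to be a CCLayer). The
// pixels are uploaded once, unchanged, and the GPU applies the
// rotation when the sprite is drawn.
CCTexture2D *texture = [[[CCTexture2D alloc] initWithImage:cameraImage] autorelease];
CCSprite *sprite = [CCSprite spriteWithTexture:texture];

CGFloat degrees;
BOOL flipped;
RotationForOrientation(cameraImage.imageOrientation, &degrees, &flipped);

sprite.rotation = degrees;  // cocos2d's rotation property is in degrees, clockwise
sprite.flipX    = flipped;
[self addChild:sprite];
```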
Why is the second approach so much faster? The net result is the same: a rotated image. It seems that either way the same bits and bytes have to be manipulated in the same way to get this result.
Upvotes: 3
Views: 555
Reputation: 3271
Have a look at this question, experimentation, and answer. UIImage may be doing some of these optimizations already in the background (since we don't know the implementation details, there is no real way to know).
Upvotes: 1