Reputation: 21140
I am trying to replicate the behavior of CGContextClipToMask on iOS, without any luck so far. Does anyone know how CGContextClipToMask works internally? The documentation says it simply multiplies the image alpha value by the mask alpha value, so that is what I am doing in my custom function. However, when I draw the resulting image onto a CGContext multiple times using normal blending, the result gets darker and darker, whereas with CGContextClipToMask the result is correct and does not darken.
My theory is that CGContextClipToMask somehow uses the destination context, in addition to the image and the mask, to produce a correct result, but I just don't know enough about Core Graphics to be sure.
I've also read this question:
How to get the real RGBA or ARGB color values without premultiplied alpha?
and it's possible I am running into that problem, but then how does CGContextClipToMask get around the precision loss of 8-bit alpha?
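For reference, here is roughly what my custom masking function does (a simplified sketch; the Pixel struct, the buffer names, and the loop are placeholders for my actual code):
#include <stddef.h>
#include <stdint.h>

typedef struct { uint8_t R, G, B, A; } Pixel;   // RGBA8888, premultiplied alpha

void applyMask(Pixel *imageData, const uint8_t *maskData, size_t pixelCount)
{
    for (size_t i = 0; i < pixelCount; i++) {
        Pixel *pixelDest = &imageData[i];
        float alphaPercent = (float)maskData[i] / 255.0f;   // mask sample for this pixel
        // Multiply the image alpha by the mask alpha; since the buffer is
        // premultiplied, the color channels are scaled as well.
        pixelDest->A = (uint8_t)((float)pixelDest->A * alphaPercent);
        pixelDest->R = (uint8_t)((float)pixelDest->R * alphaPercent);
        pixelDest->G = (uint8_t)((float)pixelDest->G * alphaPercent);
        pixelDest->B = (uint8_t)((float)pixelDest->B * alphaPercent);
    }
}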
Upvotes: 1
Views: 349
Reputation: 21140
I found the problem. When multiplying by the mask, I had to call ceilf on the RGB values, like so:
float alphaPercent = (float)maskAlpha / 255.0f;
pixelDest->A = maskAlpha;                                  // the new alpha is the mask value
pixelDest->R = ceilf((float)pixelDest->R * alphaPercent);  // round up instead of truncating
pixelDest->G = ceilf((float)pixelDest->G * alphaPercent);
pixelDest->B = ceilf((float)pixelDest->B * alphaPercent);
Amazingly, this solves the problem...
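My best guess as to why this helps (assuming a premultiplied-alpha bitmap): a plain float-to-int cast truncates, so the scaled RGB values end up slightly low relative to the alpha, and each pass of normal blending then darkens the partially transparent pixels a little more. Rounding up with ceilf avoids that downward drift. A quick check of a single channel:
#include <math.h>
#include <stdio.h>

int main(void)
{
    unsigned char R = 200, maskAlpha = 128;
    float alphaPercent = (float)maskAlpha / 255.0f;
    float scaled = (float)R * alphaPercent;          // 100.39...
    printf("truncated: %d\n", (int)scaled);          // 100 -> un-premultiplies back to ~199 (darker than 200)
    printf("ceilf:     %d\n", (int)ceilf(scaled));   // 101 -> un-premultiplies back to ~201
    return 0;
}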
Upvotes: 1
Reputation: 69027
I personally have no idea about the internals of CGContextClipToMask, but maybe you can find out something interesting by having a look at how it is implemented in GNUstep.
PS: thinking more about it, when you write "multiplies the image alpha value by the mask alpha value", something catches my attention.
As far as I know, a mask on iOS has no alpha channel, and all my attempts to use alpha-channeled masks have given wrong results. Could this help you move a bit further?
Maybe the trick is to multiply the image by the mask (which is defined in the gray color space and has just one component) and leave the image alpha channel unchanged...
How does that sound to you?
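Something along these lines (just a rough sketch of the idea; the struct and function names are mine):
#include <stddef.h>
#include <stdint.h>

typedef struct { uint8_t R, G, B, A; } Pixel;

/* Multiply only the color channels by the single-component (gray) mask
   sample and leave the image alpha channel as it is. */
void applyGrayMask(Pixel *imageData, const uint8_t *maskGray, size_t pixelCount)
{
    for (size_t i = 0; i < pixelCount; i++) {
        float m = (float)maskGray[i] / 255.0f;
        imageData[i].R = (uint8_t)((float)imageData[i].R * m);
        imageData[i].G = (uint8_t)((float)imageData[i].G * m);
        imageData[i].B = (uint8_t)((float)imageData[i].B * m);
        /* imageData[i].A is left unchanged */
    }
}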
Upvotes: 0