Reputation: 555
I'm using this function to detect whether the touched point of a UIImageView corresponds to a transparent pixel:
-(BOOL)isTransparency:(CGPoint)point
{
    CGImageRef cgim = self.image.CGImage;
    unsigned char pixel[1] = {0};

    // Convert the touch point from view coordinates to image pixel
    // coordinates (assumes the image fills the view's frame).
    point.x = point.x / (self.frame.size.width / CGImageGetWidth(cgim));
    point.y = point.y / (self.frame.size.height / CGImageGetHeight(cgim));

    // 1x1 alpha-only bitmap context backed directly by `pixel`.
    CGContextRef context = CGBitmapContextCreate(pixel,
                                                 1, 1, 8, 1, NULL,
                                                 kCGImageAlphaOnly);
    CGContextSetBlendMode(context, kCGBlendModeCopy);

    // Draw the image offset so the touched pixel lands on the context's
    // single pixel; the y offset flips from UIKit's top-left origin to
    // Core Graphics' bottom-left origin.
    CGContextDrawImage(context, CGRectMake(-point.x,
                                           -(self.image.size.height - point.y),
                                           CGImageGetWidth(cgim),
                                           CGImageGetHeight(cgim)),
                       cgim);
    CGContextRelease(context);

    CGFloat alpha = pixel[0] / 255.0;
    return alpha < 0.01;
}
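For context, a minimal sketch of the calling side (illustrative only; it assumes isTransparency: lives in a UIImageView subclass with userInteractionEnabled set to YES):

- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
    UITouch *touch = [touches anyObject];
    // The point is in this view's own coordinate system.
    CGPoint point = [touch locationInView:self];
    if ([self isTransparency:point]) {
        // Transparent pixel touched: let the touch fall through.
        [super touchesBegan:touches withEvent:event];
    } else {
        // Opaque pixel touched: handle the hit here.
    }
}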
The approach works fine as long as the UIImageView's transform is untouched. But when the user rotates the view, the transform matrix changes, and this code still samples the image in its original, untransformed position, so I get wrong results.

My question is: how can I detect whether the touched pixel of the UIImageView's image is transparent when the view is rotated?

Thanks.
EDIT: Version 1.1
Hi. Thanks for the answer. I've modified the code, but this version still behaves the same as the previous one:
-(BOOL)isTransparency:(CGPoint)point
{
    CGImageRef cgim = self.image.CGImage;
    NSLog(@"Image Size W:%lu H:%lu", CGImageGetWidth(cgim), CGImageGetHeight(cgim));

    unsigned char pixel[1] = {0};

    // Convert the touch point from view coordinates to image pixel coordinates.
    point.x = point.x / (self.frame.size.width / CGImageGetWidth(cgim));
    point.y = point.y / (self.frame.size.height / CGImageGetHeight(cgim));

    CGContextRef context = CGBitmapContextCreate(pixel,
                                                 1, 1, 8, 1, NULL,
                                                 kCGImageAlphaOnly);
    CGContextSetBlendMode(context, kCGBlendModeCopy);

    // Attempt to apply the view's transform around the image center
    // before drawing, so the rotated image is sampled.
    CGContextTranslateCTM(context, self.image.size.width * 0.5f, self.image.size.height * 0.5f);
    CGContextConcatCTM(context, self.transform);
    //CGContextScaleCTM(context, [self currentPercent]/100, [self currentPercent]/100);
    //CGContextRotateCTM(context, atan2f(self.transform.b, self.transform.a));
    CGContextTranslateCTM(context, -self.image.size.width * 0.5f, -self.image.size.height * 0.5f);

    CGContextDrawImage(context, CGRectMake(-point.x,
                                           -(self.image.size.height - point.y),
                                           self.image.size.width,
                                           self.image.size.height),
                       cgim);
    CGContextRelease(context);

    CGFloat alpha = pixel[0] / 255.0;
    NSLog(@"Is Transparent: %d", alpha < 0.01);
    return alpha < 0.01;
}
I tried several ways of rotating and scaling the original image to match the current rotation and size, without success. :(

Any help, please?
EDIT: Version 1.2
I found a workaround for this problem. I decided to create a context the size of the screen and draw only the touched object into it, applying its current transform matrix.

From that context I then create a bitmap image in which the view is drawn correctly, and pass it to the function that works with non-transformed images.

The result is that it works. I don't know if it's the optimal way to do this, but it works.

Here is the code:
-(BOOL)isTransparency:(CGPoint)point
{
    // Render this view into a context the size of the superview,
    // applying the view's current transform around its anchor point.
    UIGraphicsBeginImageContext(self.superview.bounds.size);
    CGContextRef context = UIGraphicsGetCurrentContext();

    CGContextSaveGState(context);
    CGContextSetBlendMode(context, kCGBlendModeCopy);
    CGContextTranslateCTM(context, [self center].x, [self center].y);
    CGContextConcatCTM(context, [self transform]);
    CGContextTranslateCTM(context,
                          -[self bounds].size.width * [[self layer] anchorPoint].x,
                          -[self bounds].size.height * [[self layer] anchorPoint].y);
    [[self image] drawInRect:[self bounds]];
    CGContextRestoreGState(context);

    // Snapshot the context into an image, then clean up.
    CGImageRef viewImage = CGBitmapContextCreateImage(context);
    UIGraphicsEndImageContext();

    BOOL transparent = [self isTransparentPixel:point image:viewImage];
    CGImageRelease(viewImage);
    return transparent;
}
This function renders the image at the size of the superview (in my case always full screen) and receives as a parameter the touched point, in the UIKit coordinate system of the parent view.
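For example, from the parent view's touch handler the call could look like this (a sketch; imageView is an illustrative reference to the touched image view instance):

- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
    UITouch *touch = [touches anyObject];
    // Note: the point is taken in the superview's coordinates, not the
    // image view's own, because the snapshot covers the whole parent.
    CGPoint point = [touch locationInView:self];
    if (![imageView isTransparency:point]) {
        // The touch hit an opaque pixel of imageView.
    }
}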
-(BOOL)isTransparentPixel:(CGPoint)point image:(CGImageRef)cgim
{
    NSLog(@"Image Size W:%lu H:%lu", CGImageGetWidth(cgim), CGImageGetHeight(cgim));

    unsigned char pixel[1] = {0};

    // Same 1x1 alpha-only trick as before: draw the snapshot offset so
    // the touched point (flipped to Core Graphics' bottom-left origin)
    // lands on the context's single pixel.
    CGContextRef context = CGBitmapContextCreate(pixel,
                                                 1, 1, 8, 1, NULL,
                                                 kCGImageAlphaOnly);
    CGContextSetBlendMode(context, kCGBlendModeCopy);
    CGContextDrawImage(context, CGRectMake(-point.x,
                                           -(self.superview.frame.size.height - point.y),
                                           CGImageGetWidth(cgim),
                                           CGImageGetHeight(cgim)),
                       cgim);
    CGContextRelease(context);

    CGFloat alpha = pixel[0] / 255.0;
    return alpha < 0.01;
}
This function samples the alpha value of the given pixel in the given image.

Thanks to all.
Upvotes: 3
Views: 1190
Reputation: 457
Apply the transform from your image view to your CGContext with a function such as CGContextConcatCTM() before drawing your image.
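For example, something along these lines (a sketch of the idea only, in the spirit of the asker's version 1.1, not a verified fix; pixel, point, and imageView follow the naming in the question, and the translations rotate around the image center, which assumes the default anchor point):

unsigned char pixel[1] = {0};
CGContextRef context = CGBitmapContextCreate(pixel,
                                             1, 1, 8, 1, NULL,
                                             kCGImageAlphaOnly);
CGContextSetBlendMode(context, kCGBlendModeCopy);

// Concatenate the view's transform so the rotated image is sampled.
CGSize size = imageView.image.size;
CGContextTranslateCTM(context, size.width * 0.5f, size.height * 0.5f);
CGContextConcatCTM(context, imageView.transform);
CGContextTranslateCTM(context, -size.width * 0.5f, -size.height * 0.5f);

CGContextDrawImage(context, CGRectMake(-point.x,
                                       -(size.height - point.y),
                                       size.width, size.height),
                   imageView.image.CGImage);
CGContextRelease(context);
BOOL transparent = pixel[0] / 255.0 < 0.01;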
Upvotes: 2