Albert Renshaw

Reputation: 17902

Detect most black pixel on an image - objective-c iOS

I have an image! It's been so long since I've done pixel detection. I remember you have to convert the pixels to an array somehow, then use the image's width to know when a row of pixels ends and the next one begins... lots of complex stuff, haha! Anyway, I no longer have any clue how to do this, but I need to detect the x & y coordinates of the left-most darkest pixel in my image named "image1". Any good starting places?

Upvotes: 0

Views: 1071

Answers (1)

brandonbocklund

Reputation: 605

Go to your bookstore and find "iOS Developer's Cookbook" by Erica Sadun; around page 378 there are methods for pixel detection. You can run a for loop over the resulting array of RGB values and find the pixel with the smallest sum of R, G, and B values (each channel runs 0-255, so the sum runs 0-765); that gives you the pixel closest to black. I can also post the code if needed, but the book is the best source, since it gives both the methods and the explanations.

Here are my versions, with some changes. The method names remain the same; all I changed is where the image comes from (an image picker, in my case).

-(UInt8 *)createBitmap {
    if (!self.imageCaptured) {
        NSLog(@"Error: There has not been an image captured.");
        return nil;
    }
    // Create a bitmap context and draw the image into it
    UIImage *myImage = self.imageCaptured; // image from the picker
    CGContextRef context = CreateARGBBitmapContext(myImage.size);
    if (context == NULL) return NULL;
    CGRect rect = CGRectMake(0.0f, 0.0f, myImage.size.width, myImage.size.height); // original (full image)
    // Test rectangle (center region):
    // CGRect rect = CGRectMake(myImage.size.width/2.0 - 25.0, myImage.size.height/2.0 - 25.0, myImage.size.width/2.0 + 24.0, myImage.size.height/2.0 + 24.0);

    CGContextDrawImage(context, rect, myImage.CGImage);
    UInt8 *data = CGBitmapContextGetData(context);
    CGContextRelease(context);
    // The buffer was malloc'd in CreateARGBBitmapContext, so it outlives the
    // context -- but the caller is responsible for free()ing it when done.
    return data;
}
CGContextRef CreateARGBBitmapContext(CGSize size) {
    // Create a device-RGB color space
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    if (colorSpace == NULL) {
        fprintf(stderr, "Error allocating color space\n");
        return NULL;
    }
    // Allocate memory for the bitmap data: 4 bytes (ARGB) per pixel
    size_t width  = (size_t)size.width;
    size_t height = (size_t)size.height;
    void *bitmapData = malloc(width * height * 4);
    if (bitmapData == NULL) {
        fprintf(stderr, "Error: memory not allocated\n");
        CGColorSpaceRelease(colorSpace);
        return NULL;
    }
    // Build an 8-bit-per-channel ARGB context backed by that buffer
    CGContextRef context = CGBitmapContextCreate(bitmapData, width, height, 8,
                                                 width * 4, colorSpace,
                                                 kCGImageAlphaPremultipliedFirst);
    CGColorSpaceRelease(colorSpace);
    if (context == NULL) {
        fprintf(stderr, "Error: context not created\n");
        free(bitmapData);
        return NULL;
    }
    return context;
}
// Byte offsets into the ARGB buffer: each pixel occupies 4 bytes
// (A at +0, R at +1, G at +2, B at +3), and each row is w pixels wide.
NSUInteger blueOffset(NSUInteger x, NSUInteger y, NSUInteger w){
    return y*w*4 + (x*4+3);
}
NSUInteger redOffset(NSUInteger x, NSUInteger y, NSUInteger w){
    return y*w*4 + (x*4+1);
}

The method on the bottom, redOffset, gives you the byte offset of the Red value in the ARGB (Alpha-Red-Green-Blue) layout. To look at a different channel, change the constant added to x*4 in the offset function: 0 for alpha, keep it at 1 for red, 2 for green, and 3 for blue. This works because the methods above produce a flat array of bytes, and that constant selects the channel's index within each 4-byte pixel. Essentially, read the red, green, and blue values for each pixel (each 0-255) and sum them; whichever pixel has the lowest combined value is the most black.
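The scan loop itself is plain C once you have the buffer. A minimal sketch, assuming the ARGB layout produced by createBitmap above (the function name findDarkestPixel is mine, not from the book); iterating columns in the outer loop means ties are broken in favor of the smaller x, which answers the "left-most darkest pixel" part of the question:

```c
#include <stddef.h>
#include <stdint.h>

// Scan an ARGB8888 buffer (4 bytes per pixel: A, R, G, B) for the pixel
// with the smallest R+G+B sum. Columns are iterated in the outer loop so
// that, among equally dark pixels, the left-most one wins.
static void findDarkestPixel(const uint8_t *data, size_t width, size_t height,
                             size_t *outX, size_t *outY) {
    unsigned bestSum = 3 * 255 + 1;            // larger than any possible sum
    *outX = 0;
    *outY = 0;
    for (size_t x = 0; x < width; x++) {
        for (size_t y = 0; y < height; y++) {
            size_t i = y * width * 4 + x * 4;  // byte index of pixel (x, y); A is at +0
            unsigned sum = data[i + 1] + data[i + 2] + data[i + 3]; // R + G + B
            if (sum < bestSum) {
                bestSum = sum;
                *outX = x;
                *outY = y;
            }
        }
    }
}
```

You would call this with the pointer returned by createBitmap and the image's width and height, then free the buffer afterwards. A strict "<" comparison keeps the first (left-most) pixel found at any given darkness.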

Upvotes: 2
