Unknown Coder

Reputation: 6741

MonoTouch iOS: Recognize colors from a picture?

I don't know if this is possible with MonoTouch so I thought I'd ask the experts. Let's say I want to be able to take a picture of a painted wall and recognize the general color from it - how would I go about doing that in C#/MonoTouch?

I know I need to capture the image and do some image processing but I'm more curious about the dynamics of it. Would I need to worry about lighting conditions? I assume the flash would "wash out" my image, right?

Also, I don't need to know exact colors, I just need to know the general color family. I don't need to know a wall is royal blue, I just need it to return "blue". I don't need to know it's hunter green, I just need it to return "green". I've never done that kind of image processing before.

Upvotes: 1

Views: 284

Answers (2)

James Holderness

Reputation: 23011

The code below relies on the .NET System.Drawing.Bitmap class and the System.Drawing.Color class, but I believe these are both supported in MonoTouch (at least based on my reading of the Mono Documentation).

So assuming you have an image in a System.Drawing.Bitmap object named bmp, you can obtain the average hue of that image with code like this:

// Sum the hue of every pixel, then divide by the number of pixels.
// (A plain arithmetic mean ignores that hue wraps around at 360 degrees,
// but it is good enough for a rough estimate.)
float hue = 0;
int w = bmp.Width;
int h = bmp.Height;
for (int y = 0; y < h; y++) {
  for (int x = 0; x < w; x++) {
    Color c = bmp.GetPixel(x, y);
    hue += c.GetHue();
  }
}
hue /= (w * h);

That's iterating over the entire image which may be quite slow for a large image. If performance is an issue, you may want to limit the pixels evaluated to a smaller subsection of the image (as suggested by juhan_h), or just use a smaller image to start with.
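For example, one cheap variation on that idea (my own sketch, not part of the original code) is to read only every Nth pixel; the step of 10 here is an arbitrary choice:

float hue = 0;
int samples = 0;
const int step = 10;  // sample every 10th pixel in each direction (arbitrary)
for (int y = 0; y < bmp.Height; y += step) {
  for (int x = 0; x < bmp.Width; x += step) {
    hue += bmp.GetPixel(x, y).GetHue();
    samples++;
  }
}
if (samples > 0)
  hue /= samples;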

Then given the average hue, which is in the range 0 to 360 degrees, you can map that number to a color name with something like this:

// Names of the hue bands and the upper bound (in degrees) of each band.
String[] hueNames = new String[] {
  "red","orange","yellow","green","cyan","blue","purple","pink"
};
float[] hueValues = new float[] {
  18, 54, 72, 150, 204, 264, 294, 336
};

// hueName defaults to "red" so that hues above 336 wrap back around to red.
String hueName = hueNames[0];
for (int i = 0; i < hueNames.Length; i++) {
  if (hue < hueValues[i]) {
    hueName = hueNames[i];
    break;
  }
}

I've just estimated some values for the hueValues and hueNames tables, so you may want to adjust those tables to suit your requirements. The values are the point at which the color appears to change to the next name (e.g. the dividing line between red and orange occurs at around 18 degrees).

To get an idea of the range of colors represented by the hue values, look at the color wheel below. Starting at the top, it goes from red/orange (around 0° - north) to yellow/green (around 90° - east), to cyan (around 180° - south), to blue/purple (around 270° - west).

Color wheel

You should note, however, that we are ignoring the saturation and brightness levels, so the results of this calculation will be less than ideal on faded colors and under low light conditions. However, if all you are interested in is the general color of the wall, I think it might be adequate for your needs.
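If you did want a rough guard for those cases, a sketch like the one below (my own, with guessed thresholds) could check brightness and saturation first and only fall back to the hue lookup for properly saturated pixels; HueToName is a hypothetical wrapper around the table lookup shown above:

string NameColor(Color c) {
  float brightness = c.GetBrightness();
  float saturation = c.GetSaturation();

  if (brightness < 0.2f)
    return "black";                               // too dark to judge the hue
  if (saturation < 0.15f)
    return brightness > 0.85f ? "white" : "gray"; // washed-out or grayish

  // Otherwise use the hue-to-name table above (HueToName is hypothetical).
  return HueToName(c.GetHue());
}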

Upvotes: 2

juhan_h

Reputation: 4021

I recently dealt with shifting white balance on iOS (original question here: iOS White point/white balance adjustment examples/suggestions), which included a similar problem. I cannot give you code samples in C#, but here are the steps I would take:

  1. Capture the image
  2. Decide what point/part of the image is of interest (the smaller the better)
  3. Calculate the "color" of that point of the image
  4. Convert the "color" to human-readable form (I guess that is what you need?)

To accomplish step #2 I would either let the user choose the point or take the point to be in the center of the image, because that is usually the place to which the camera is actually pointed.

How to accomplish step #3 depends on how big the area chosen in step #2 is. If the area is 1x1 pixels, you render it in RGB and read the component (i.e. red, green and blue) values from that rendered pixel. If the area is larger, you need to get the RGB values of each pixel contained in that area and average them. If you only need a general color, that is mostly it.

But if you need to compensate for lighting conditions, the problem gets much more complicated. To compensate for lighting (i.e. white balancing) you need to apply some transformations and make some guesses about the conditions in which the photo was taken. I will not go into details (I wrote my Bachelor's thesis on those details), but Wikipedia's article on white balance is a good starting point. It is also worth noting that any solution to the white balancing problem will always be subjective and dependent on guesses about the light in which the photo was taken (at least as far as I know).
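Purely as an illustration (the answer above deliberately gives no C# samples), steps #2 and #3 might look roughly like this with System.Drawing, averaging the RGB values of a center region covering about 10% of each dimension; the region size is an arbitrary choice and white balancing is ignored:

// Pick a small rectangle in the middle of the image (step #2).
int regionW = Math.Max(1, bmp.Width / 10);
int regionH = Math.Max(1, bmp.Height / 10);
int startX = (bmp.Width - regionW) / 2;
int startY = (bmp.Height - regionH) / 2;

// Average the red, green and blue components over that region (step #3).
long r = 0, g = 0, b = 0;
for (int y = startY; y < startY + regionH; y++) {
  for (int x = startX; x < startX + regionW; x++) {
    Color c = bmp.GetPixel(x, y);
    r += c.R; g += c.G; b += c.B;
  }
}
int pixels = regionW * regionH;
Color average = Color.FromArgb((int)(r / pixels), (int)(g / pixels), (int)(b / pixels));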

To accomplish step #4 you should search for tables that map RGB values to human-readable colors. I have not had the need for these kinds of tables myself, but I am sure they exist somewhere on the Internet.
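For instance, continuing from the average color computed in the sketch above (and assuming the usual System.Drawing and System.Collections.Generic namespaces), a tiny lookup table could pick the named color with the smallest RGB distance; the entries below are just a handful of examples, not a real palette:

var names = new Dictionary<string, Color> {
  { "red",    Color.FromArgb(255,   0,   0) },
  { "green",  Color.FromArgb(  0, 128,   0) },
  { "blue",   Color.FromArgb(  0,   0, 255) },
  { "yellow", Color.FromArgb(255, 255,   0) },
  { "white",  Color.FromArgb(255, 255, 255) },
  { "black",  Color.FromArgb(  0,   0,   0) }
};

// Pick the entry with the smallest squared Euclidean distance in RGB space.
string closest = null;
int best = int.MaxValue;
foreach (var entry in names) {
  int dr = average.R - entry.Value.R;
  int dg = average.G - entry.Value.G;
  int db = average.B - entry.Value.B;
  int dist = dr * dr + dg * dg + db * db;
  if (dist < best) { best = dist; closest = entry.Key; }
}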

Upvotes: 1
