Suzan Cioc

Reputation: 30127

Better converting to gray?

Is there a better way of converting an (RGB) image to grayscale than

Y = 0.299 R + 0.587 G + 0.114 B

This way produces light intensity, which may not mark objects well for further processing. For example, if we have a hotspot or reflection, it will show up as a noticeable object in such a grayscale.

I am experimenting with other color spaces like Lab, but they have poor contrast.
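For reference, a minimal sketch of the conversion in question, assuming OpenCV's Python bindings and an 8-bit input (the file name is just a placeholder); the built-in call and the hand-written weighted sum agree up to rounding:

    import cv2
    import numpy as np

    img = cv2.imread("input.png")            # hypothetical input; OpenCV loads it as BGR

    # OpenCV's built-in conversion applies the BT.601 weights quoted above.
    gray_cv = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

    # The same weighted sum written out by hand (note the BGR channel order).
    b, g, r = cv2.split(img)
    gray_manual = (0.299 * r + 0.587 * g + 0.114 * b).astype(np.uint8)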

Upvotes: 2

Views: 745

Answers (4)

Francesco Callari

Reputation: 11825

It's not as simple as asking for a recipe - you need to define what you need.

The transform you used harks back to the early days of color TV, when there was a need for a way to separately encode the luminance and chrominance in the analog broadcast signal, taking into account the fact that a lot less bandwidth was available to transmit chroma than luma. The encoding is very loosely related to the higher relative sensitivity of the cones in the human retina in the yellow-green band.

There is no reason to use it blindly. Rather, you need to clearly express what the goal of your desired transformation is, translate that goal into a (quantifiable) criterion, then find a particular transform that optimizes that criterion. The transform can be global (i.e. like the TV one you used) or adaptive (i.e. depending on the color values in a neighborhood of the current pixel), and either way it can be linear (like, again, the TV one) or not.
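To make that concrete, here is a hedged sketch (my own illustration, not part of this answer) of a global linear transform optimized for one quantifiable criterion, namely maximizing the variance of the resulting grayscale image, via PCA over the pixel colors. The function name and the NumPy/OpenCV usage are assumptions:

    import cv2
    import numpy as np

    def pca_gray(img_bgr):
        # Flatten to an (N, 3) list of pixel colors and center it.
        pixels = img_bgr.reshape(-1, 3).astype(np.float32)
        pixels -= pixels.mean(axis=0)
        # Direction of maximum color variance: leading eigenvector of the 3x3 covariance.
        cov = np.cov(pixels, rowvar=False)
        eigvals, eigvecs = np.linalg.eigh(cov)
        w = eigvecs[:, np.argmax(eigvals)]
        # Global linear transform: project every pixel onto that direction,
        # then rescale the result into an 8-bit grayscale image.
        projected = (pixels @ w).reshape(img_bgr.shape[:2])
        return cv2.normalize(projected, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

This is only one possible criterion; a different goal (say, preserving edges between specific colors) would lead to different weights, or to an adaptive rather than global transform.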

Upvotes: 4

Rosa Gronchi

Reputation: 1911

There are a few works in this field. For example, this one: http://dl.acm.org/citation.cfm?id=2407754

Upvotes: 0

Bull

Reputation: 11941

A trick you can use with Lab is to simply ignore the L channel; the other two channels then give only the variation in color. This can be very effective if you want to find the boundaries of an object that has a bright light shining on it.

There are many other color spaces that separate brightness from color information, like Lab. Some examples are HSV, YUV, YCrCb. Just pick whichever of these works best, discard the brightness and work with two channels of color.
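A minimal sketch of the "ignore the L channel" trick, assuming OpenCV's Python bindings (the file name and the Canny thresholds are just placeholders); the same pattern works for HSV, YUV or YCrCb by changing the conversion code and the channel you discard:

    import cv2

    img = cv2.imread("lit_object.png")        # hypothetical input image (BGR)

    # Convert to Lab and drop the lightness channel.
    lab = cv2.cvtColor(img, cv2.COLOR_BGR2LAB)
    L, a, b = cv2.split(lab)

    # a and b carry only color variation, so a strong highlight on the object
    # barely affects them; e.g. edge-detect on them directly.
    edges_a = cv2.Canny(a, 50, 150)
    edges_b = cv2.Canny(b, 50, 150)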

Lab is a "perceptual" color space that attempts to match non-linearities in the eye. That is, Lab values that are close together will be perceived as very similar by a human, while Lab values that differ greatly will be perceived as very different. RGB does not behave that way.


Some notes about the conversion you mentioned:

If you use the CV_RGB2GRAY conversion in OpenCV, it uses the coefficients that you mentioned. However, whether these are the correct numbers to use depends on the flavor of RGB you have.

Your numbers are for the BT.601 primaries used in analogue TV such as NTSC and PAL. Newer HDTV, and the sRGB space widely used in computer monitors and printers, use BT.709 primaries, in which case the conversion should be Y = 0.2126 R + 0.7152 G + 0.0722 B, where Y is as defined by CIE 1931. The L channel in Lab also corresponds to the CIE 1931 luminance value.
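Since OpenCV's built-in grayscale conversion applies the BT.601 weights, the BT.709 version has to be computed by hand. A hedged sketch, applying the weights directly to the stored pixel values just as the formula above does (the NumPy usage and the file name are assumptions):

    import cv2
    import numpy as np

    img = cv2.imread("srgb_photo.png")        # hypothetical sRGB input, stored in BGR order

    # BT.709 luma weights, listed in BGR order to match OpenCV's channel layout.
    weights_709 = np.array([0.0722, 0.7152, 0.2126], dtype=np.float32)
    gray_709 = (img.astype(np.float32) @ weights_709).astype(np.uint8)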

Then there is Adobe RGB, which can represent more colors than sRGB (it has a wider "gamut"). But I don't think OpenCV has a conversion for it.

The best way to convert RGB to grayscale depends on where your image comes from and what you want to do with it.

It would be worth looking at the OpenCV cvtColor() documentation.

Upvotes: 1

Boyko Perfanov

Reputation: 3047

Since people can actually identify the concepts "shadow" and "reflection", it stands to reason that this is a fairly high-level operation. In addition, a person can be "blinded" or confused by these effects. So I will go with "No, there is no significantly better, low-level way to eliminate different luminance effects".

You can make a module that detects adjacent lightness-distorted regions (based on cues like hue and chroma, spatial factors such as whether they form a "jigsaw puzzle", etc.) and stitches them together.

I recommend HSV because it has worked well for me in quite reliably overcoming shadows in images.
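A minimal sketch of that HSV approach, assuming OpenCV's Python bindings; the input file name and the inRange bounds are purely illustrative:

    import cv2

    img = cv2.imread("scene_with_shadows.png")   # hypothetical input (BGR)

    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    h, s, v = cv2.split(hsv)

    # Hue and saturation change far less across a shadow boundary than value does,
    # so segment on them and leave the V bounds wide open.
    mask = cv2.inRange(hsv, (20, 80, 0), (40, 255, 255))  # e.g. a yellowish object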

Upvotes: 2
