Gepeto

Reputation: 316

Detect if image is color, grayscale or black and white using Python

I extract page images from a PDF file in JPEG format and I need to determine whether each image is mostly grayscale, color, or black and white (with a tolerance factor).

I have found some ways to work with color detection in PIL (here and here), but I can't figure out how to answer this simple (visual) question: is the image mostly black and white, color, or grayscale?

I prefer working with Python and PIL for this part, but I could also use OpenCV if someone has a clue (or a solution).

Upvotes: 13

Views: 34655

Answers (7)

Gepeto

Reputation: 316

I have found a way to guess this with the PIL.ImageStat module. Thanks to this post for the monochromatic detection part.

from functools import reduce
from PIL import Image, ImageStat

MONOCHROMATIC_MAX_VARIANCE = 0.005
COLOR = 1000
MAYBE_COLOR = 100

def detect_color_image(file):
    # variance of each band: 1 value for bilevel/grayscale images, 3 for RGB
    v = ImageStat.Stat(Image.open(file)).var
    is_monochromatic = reduce(lambda x, y: x and y < MONOCHROMATIC_MAX_VARIANCE, v, True)
    print(file, '-->\t', end='')
    if is_monochromatic:
        print("Monochromatic image")
    else:
        if len(v) == 3:
            # spread between the band variances separates color from grayscale
            maxmin = abs(max(v) - min(v))
            if maxmin > COLOR:
                print("Color\t\t\t", end='')
            elif maxmin > MAYBE_COLOR:
                print("Maybe color\t", end='')
            else:
                print("grayscale\t\t", end='')
            print("(", maxmin, ")")
        elif len(v) == 1:
            print("Black and white")
        else:
            print("Don't know...")

The COLOR and MAYBE_COLOR constants are quick switches to separate color from grayscale images, but they are not foolproof. For example, I have several JPEG images that are detected as color but are really grayscale with some color artefacts from the scanning process. That's why I have an extra level to distinguish clearly color images from the rest.
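
As a quick usage sketch for the original use case (the "pages/page_*.jpg" layout is just an assumption; adjust the glob pattern to wherever your extracted page images live):

import glob

# classify every page image extracted from the PDF
for page in sorted(glob.glob('pages/page_*.jpg')):
    detect_color_image(page)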

If someone has a better approach, let me know.

Upvotes: 4

Jeru Luke

Reputation: 21203

This solution is inspired by TomB's post, with a slight change: Tom's approach is based on the RGB color space, while mine is based on the LAB color space. To learn more about LAB space, please go through this post and the link mentioned within.

Advantage of using LAB space

LAB has 3 channels just like RGB, but only 2 of them carry color information (A and B), while the L channel represents brightness. Unlike RGB, where we have to analyze all three channels, with LAB we only need to analyze 2. The benefit becomes apparent when analyzing a large number of images.

Method:

The method is essentially the same as in Tom's post. Here we will:

  • obtain A and B channels of the image
  • find the mean value of the difference between them
  • determine a threshold above which all images can be labelled as color.

Code

Images used:

Gray image:


Color image:


import cv2
import numpy as np

# read the gray and the color test image
einstein_img = cv2.imread('Einstein.jpg')
flower_img = cv2.imread('flower.jpg')

# convert to LAB space
elab = cv2.cvtColor(einstein_img, cv2.COLOR_BGR2LAB)
flab = cv2.cvtColor(flower_img, cv2.COLOR_BGR2LAB)

# split the channels
el, ea, eb = cv2.split(elab)
# obtain difference between A and B channel at every pixel location
de = abs(ea-eb)
# find the mean of this difference
mean_e = np.mean(de)

# same as above for the color image:
fl, fa, fb = cv2.split(flab)
df = abs(fa-fb)
mean_f = np.mean(df)

# for gray image
print(mean_e) 

0.0

# for color image
print(mean_f)

83.5455

Why does this work?

This works because images that are predominantly white, gray and black show very little variation in the two color channels of LAB space. LAB was designed to segment/isolate dominant colors well, but it also works well for images with little color.

A and B channel of colored flower image placed beside each other:


Since there are differences between the two at each pixel, we obtain a non-zero mean value.

A and B channel of gray Einstein image placed beside each other:


Here, however, the mean difference is zero.

Note: Although 0 is the ideal mean value, there may be cases where a non-zero value appears for a gray image. It won't be as large as for a color image, though. One can define a threshold in such scenarios, as sketched below.
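
As a minimal sketch of such a thresholded helper (the function name is_color_image and the default threshold of 5 are my own choices, not part of the answer above, and the threshold will likely need tuning on your images):

import cv2
import numpy as np

def is_color_image(path, threshold=5.0):
    # convert to LAB and compare the two color channels
    img = cv2.imread(path)
    lab = cv2.cvtColor(img, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    # cv2.absdiff avoids uint8 wrap-around when B > A at a pixel
    mean_diff = np.mean(cv2.absdiff(a, b))
    return mean_diff > threshold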

Upvotes: 1

SOUVIK SAHA

Reputation: 11

import numpy as np
import cv2
import imutils


def image_colorfulness(image):
    # split into float B, G, R channels
    (B, G, R) = cv2.split(image.astype("float"))
    # opponent color components: red-green and yellow-blue
    rg = np.absolute(R - G)
    yb = np.absolute(0.5 * (R + G) - B)
    # mean and standard deviation of both components
    (rgMean, rgStd) = (np.mean(rg), np.std(rg))
    (ybMean, ybStd) = (np.mean(yb), np.std(yb))
    # combine them into a single colorfulness score
    stdRoot = np.sqrt((rgStd ** 2) + (ybStd ** 2))
    meanRoot = np.sqrt((rgMean ** 2) + (ybMean ** 2))
    return stdRoot + (0.3 * meanRoot)


image = cv2.imread('green.JPG')
image = imutils.resize(image, width=250)
C  = image_colorfulness(image)
#set a threshold 
print(C)
if C > 10:
    print('its a color image...')
elif 8 < C <= 10:
    print('Not Sure...')
else:
    print('Black and white image...')
cv2.putText(image, "{:.2f}".format(C), (40, 40), cv2.FONT_HERSHEY_SIMPLEX, 1.4, (0, 255, 0), 3)

cv2.imshow('im',image)
cv2.waitKey(0)

Upvotes: 1

TomB

Reputation: 1110

We use this simple function to determine the color-factor of an image.

# Iterate over all Pixels in the image (width * height times) and do this for every pixel:
{
    int rg = Math.abs(r - g);
    int rb = Math.abs(r - b);
    int gb = Math.abs(g - b);
    diff += rg + rb + gb;
}

return diff / (height * width) / (255f * 3f);

As gray values have r-g = 0, r-b = 0 and g-b = 0, diff will be near 0 for grayscale images and > 0 for colored images.
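
Since the question asks for Python, here is a minimal sketch of the same metric with NumPy/OpenCV (the function name color_factor and the example file name are my own, not part of TomB's answer):

import cv2
import numpy as np

def color_factor(path):
    # cast to a signed type so channel differences don't wrap around
    img = cv2.imread(path).astype(np.int16)
    b, g, r = img[..., 0], img[..., 1], img[..., 2]
    # per-pixel sum of absolute pairwise channel differences
    diff = np.abs(r - g) + np.abs(r - b) + np.abs(g - b)
    # average over all pixels and normalize to [0, 1]
    return diff.mean() / (255.0 * 3.0)

# near 0 for grayscale pages, clearly above 0 for color pages
# print(color_factor('page_001.jpg'))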

Upvotes: 12

Tim

Reputation: 5681

I personally prefer TomB's answer. This is not a new answer; I just want to post the Java version:

private Mat calculateChannelDifference(Mat mat) {   

    // Create channel list:
    List<Mat> channels = new ArrayList<>();

    for (int i = 0; i < 3; i++) {
        channels.add(new Mat());
    }

    // Split the channels of the input matrix:
    Core.split(mat, channels);

    Mat temp = new Mat();

    Mat result = Mat.zeros(mat.size(), CvType.CV_8UC1);

    for (int i = 0; i < channels.size(); i++) {

        // Calculate difference between 2 successive channels:
        Core.absdiff(channels.get(i), channels.get((i + 1) % channels.size()), temp);

        // Add the difference to the result:
        Core.add(temp, result, result);
    }

    return result;
}

The result is the difference as a matrix; this way you can apply a threshold and even detect shapes. If you want the result as a single number, you just have to calculate the average value, which can be done using Core.mean().

Upvotes: 1

Noah Whitman

Reputation: 231

I tried Gepeto's solution and it has a lot of false positives, since the overall variances of the color bands can be similar just by chance. The correct way to do this is to calculate the variance per pixel. Shrink the image down first so you don't have to process millions of pixels.

By default this function also uses a mean color bias adjustment, which I find improves the prediction. A side effect is that it will also detect monochrome but non-grayscale images (typically sepia-toned stuff; the model seems to break down a little for larger deviations from grayscale). You can separate these out from true grayscale by thresholding on the color band means.

I ran this on a test set of 13,000 photographic images and got 99.1% precision and 92.5% recall. Accuracy could probably be improved further with a nonlinear bias adjustment (color values must be between 0 and 255, for example). Using the median squared error instead of the MSE might also better handle e.g. grayscale images with small color stamps.

from PIL import Image, ImageStat

def detect_color_image(file, thumb_size=40, MSE_cutoff=22, adjust_color_bias=True):
    pil_img = Image.open(file)
    bands = pil_img.getbands()
    if bands == ('R', 'G', 'B') or bands == ('R', 'G', 'B', 'A'):
        thumb = pil_img.resize((thumb_size, thumb_size))
        SSE, bias = 0, [0, 0, 0]
        if adjust_color_bias:
            # per-band deviation from the overall mean
            bias = ImageStat.Stat(thumb).mean[:3]
            bias = [b - sum(bias) / 3 for b in bias]
        for pixel in thumb.getdata():
            mu = sum(pixel[:3]) / 3
            SSE += sum((pixel[i] - mu - bias[i]) * (pixel[i] - mu - bias[i]) for i in [0, 1, 2])
        MSE = float(SSE) / (thumb_size * thumb_size)
        if MSE <= MSE_cutoff:
            print("grayscale\t", end='')
        else:
            print("Color\t\t\t", end='')
        print("( MSE=", MSE, ")")
    elif len(bands) == 1:
        print("Black and white", bands)
    else:
        print("Don't know...", bands)

Upvotes: 23

scap3y

Reputation: 1198

You can use the cv::Mat::channels() method, which tells you whether the image is "grayscale" (i.e., single-channel) or "color" (i.e., 3-channel). For black and white, you will need to set up deeper tests based on the grayscale values, since the definition varies.
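
For reference, a quick channel check in Python/OpenCV might look like the sketch below (the file name is a placeholder). Keep in mind that scans of grayscale pages are often saved as 3-channel JPEGs, and cv2.imread returns 3-channel BGR by default anyway, so the channel count alone does not answer the original question:

import cv2

# IMREAD_UNCHANGED keeps the channel count stored in the file
img = cv2.imread('page_001.jpg', cv2.IMREAD_UNCHANGED)
if img.ndim == 2 or img.shape[2] == 1:
    print('single-channel (grayscale or bilevel) image')
else:
    print('multi-channel image -- the content may still be gray')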

Upvotes: -3
