Reputation: 1193
I'm doing some basic face detection in OpenCV, and every example I look at converts the image to grayscale and then runs face detection on the grayscale version...
At first I thought it was for performance reasons, but I did a comparison and found no significant performance boost.
This code:
faceCascade.detectMultiScale(*image, *faces, 1.1, 3, CASCADE_SCALE_IMAGE, Size(60,60));
Performs about the same as this code:
Mat gray;
cvtColor(*image, gray, COLOR_BGR2GRAY);
faceCascade.detectMultiScale(gray, *faces, 1.1, 3, CASCADE_SCALE_IMAGE, Size(60,60));
So this raises the question: why does everyone convert to grayscale in OpenCV?
Thanks
Upvotes: 6
Views: 4438
Reputation: 1193
In a nutshell:
It is needed, but if you don't do it, OpenCV will do it for you.
Since you are passing a Mat, OpenCV already has the depth and channel information (e.g. CV_8UC3), so it can just do the conversion for you. This is smart:
why have developers write three lines of code when they could write one?
Nonetheless, thank you for the answer and the digging.
Some notes for other developers:
cvtColor takes about 3 ms on a Raspberry Pi 3 to convert a 640x480 BGR image to grayscale.
If you are calling multiple OpenCV methods that require grayscale, you can get better performance by converting to grayscale once and passing that grayscale image into each of those methods.
Upvotes: 1
Reputation: 3091
Everyone converts to grayscale because many functions expect grayscale input. From the OpenCV documentation, CascadeClassifier::detectMultiScale()
also expects grayscale:
CascadeClassifier::detectMultiScale(const Mat& image,...)
image – Matrix of the type CV_8U containing an image where objects are detected.
And CV_8U
is a 1-channel 8-bit image, as opposed to CV_8UC3
, which is a 3-channel image.
EDIT: If you don't convert the image to grayscale before passing it to the method, OpenCV will do it for you under the hood.
Upvotes: 6