RandUs

Reputation: 21

OpenCV not detecting eyes correctly

I need my program to detect eyes (separately, whether open or closed), crop them, and save them as images. It works, but not in every photo.

I tried everything I could think of: different values for scaleFactor and minNeighbors, and adding min and max sizes for the detected eyes (it did not make much difference), roughly as shown below.
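
For reference, this is roughly what the size limits looked like (the size values are just examples I tried):

eyes = eyes_cascade.detectMultiScale(img[y:y + h, x:x + w], scaleFactor=1.1,
                                     minNeighbors=5, minSize=(20, 20), maxSize=(80, 80))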

I still get issues. It sometimes detects more than 2 eyes, sometimes only 1, and sometimes it even mistakes nostrils for eyes :D. The errors are especially frequent when the eyes are closed.

What can I do to improve accuracy? This is very important for the rest of my program.

import cv2

img = cv2.imread('input.jpg')   # example path; the image is loaded earlier in my program

face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')
eyes_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + 'haarcascade_eye.xml')

faces_detected = face_cascade.detectMultiScale(img, scaleFactor=1.1, minNeighbors=5)

# Take the first detected face and draw its bounding box
(x, y, w, h) = faces_detected[0]
cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 1)

# Detect eyes inside the face region, then crop and save each one
eyes = eyes_cascade.detectMultiScale(img[y:y + h, x:x + w], scaleFactor=1.1, minNeighbors=5)
count = 1
for (ex, ey, ew, eh) in eyes:
    cv2.rectangle(img, (x + ex, y + ey), (x + ex + ew, y + ey + eh), (255, 255, 255), 1)
    crop_img = img[y + ey:y + ey + eh, x + ex:x + ex + ew]
    s1 = 'Images/{}.jpg'.format(count)
    count = count + 1
    cv2.imwrite(s1, crop_img)

Upvotes: 2

Views: 2338

Answers (1)

Alex

Reputation: 1111

For face detection, my go-to would be dlib (via its Python API). It is more involved and slower, but it produces much higher quality results.

Step 1 is converting from OpenCV to dlib:

img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)

Next, you can use the dlib face detector to detect the faces (the second argument is the number of times to upsample the image before detecting):

detector = dlib.get_frontal_face_detector()
detections = detector(img, 1)

Then find facial landmarks using a pre-trained 68 point predictor:

sp = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")
faces = dlib.full_object_detections()
for det in detections:
    faces.append(sp(img, det))

Note: from here you could also get an aligned face chip with dlib.get_face_chip(img, faces[0]).
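
A minimal sketch of that (the size and padding values are just illustrative, and the chip is converted back to BGR before saving with OpenCV):

chip = dlib.get_face_chip(img, faces[0], size=150, padding=0.25)
cv2.imwrite("face_chip.jpg", cv2.cvtColor(chip, cv2.COLOR_RGB2BGR))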

Now you can get bounding boxes and the locations of the eyes:

bb = faces[0].rect

right_eye = [faces[0].part(i) for i in range(36, 42)]
left_eye = [faces[0].part(i) for i in range(42, 48)]

Here are all the landmark index mappings according to pyimagesearch (a small helper using them is sketched after the list):

mouth: 48 - 68
right_eyebrow: 17 - 22
left_eyebrow: 22 - 27
right_eye: 36 - 42
left_eye: 42 - 48
nose: 27 - 35
jaw: 0 - 17
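
If you want other regions too, the same slicing pattern works; here is a small sketch (the dictionary and helper names are just for illustration):

# Hypothetical helper: grab any landmark group from a detection by its index range
LANDMARK_RANGES = {
    "mouth": (48, 68),
    "right_eyebrow": (17, 22),
    "left_eyebrow": (22, 27),
    "right_eye": (36, 42),
    "left_eye": (42, 48),
    "nose": (27, 35),
    "jaw": (0, 17),
}

def landmark_points(face, name):
    # Return the (x, y) landmark coordinates for one facial region
    start, end = LANDMARK_RANGES[name]
    return [(face.part(i).x, face.part(i).y) for i in range(start, end)]

mouth = landmark_points(faces[0], "mouth")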

Here are the results and the code I put together: Example 1 Example 2

import dlib
import cv2

# Load image
img = cv2.imread("monalisa.jpg")

# Convert to dlib
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)

# dlib face detection
detector = dlib.get_frontal_face_detector()
detections = detector(img, 1)

# Find landmarks
sp = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")
faces = dlib.full_object_detections()
for det in detections:
    faces.append(sp(img, det))

# Bounding box and eyes
bb = [i.rect for i in faces]
bb = [((i.left(), i.top()),
       (i.right(), i.bottom())) for i in bb]                            # Convert out of dlib format

right_eyes = [[face.part(i) for i in range(36, 42)] for face in faces]
right_eyes = [[(i.x, i.y) for i in eye] for eye in right_eyes]          # Convert out of dlib format

left_eyes = [[face.part(i) for i in range(42, 48)] for face in faces]
left_eyes = [[(i.x, i.y) for i in eye] for eye in left_eyes]            # Convert out of dlib format

# Display
imgd = cv2.cvtColor(img, cv2.COLOR_RGB2BGR)             # Convert back to OpenCV
for i in bb:
    cv2.rectangle(imgd, i[0], i[1], (255, 0, 0), 5)     # Bounding box

for eye in right_eyes:
    cv2.rectangle(imgd, (max(eye, key=lambda x: x[0])[0], max(eye, key=lambda x: x[1])[1]),
                        (min(eye, key=lambda x: x[0])[0], min(eye, key=lambda x: x[1])[1]),
                        (0, 0, 255), 5)
    for point in eye:
        cv2.circle(imgd, (point[0], point[1]), 2, (0, 255, 0), -1)

for eye in left_eyes:
    cv2.rectangle(imgd, (max(eye, key=lambda x: x[0])[0], max(eye, key=lambda x: x[1])[1]),
                        (min(eye, key=lambda x: x[0])[0], min(eye, key=lambda x: x[1])[1]),
                        (0, 255, 0), 5)
    for point in eye:
        cv2.circle(imgd, (point[0], point[1]), 2, (0, 0, 255), -1)

cv2.imwrite("output.jpg", imgd)

cv2.imshow("output", imgd)
cv2.waitKey(0)
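
Since your end goal is cropping and saving the eyes, here is a minimal sketch built on the point lists above (the margin value and file names are just examples):

clean_bgr = cv2.cvtColor(img, cv2.COLOR_RGB2BGR)        # crop from the image without the drawings
count = 1
for eye in right_eyes + left_eyes:
    xs = [p[0] for p in eye]
    ys = [p[1] for p in eye]
    margin = 5                                          # small padding around the landmarks
    x1, y1 = max(min(xs) - margin, 0), max(min(ys) - margin, 0)
    x2, y2 = max(xs) + margin, max(ys) + margin
    cv2.imwrite("Images/{}.jpg".format(count), clean_bgr[y1:y2, x1:x2])
    count += 1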

Upvotes: 3
