wenxi

Reputation: 133

Tips on building a program detecting pupil in images

I am working on a project to build a program that automatically gives a reasonably accurate detection of the pupil region in eye pictures. I am currently using SimpleCV in Python, since Python is easier to experiment with. As I have just started, the eye pictures I am working with are fairly standardized. However, the size of the iris and pupil, as well as the iris color, can vary, and the position of the eye can shift a little between pictures. Here's a picture from Wikipedia that is similar to the pictures I am using: "MyStrangeIris.JPG" by Epicstessie is licensed under CC BY-SA 3.0

I have tried simple thresholding. Since different eyes have different iris colors, a fixed threshold does not work on all pictures.

In addition, I tried SimpleCV's built-in Sobel and Canny edge detection, but neither works well, especially for eyes with a darker iris. I also doubt that Sobel or Canny alone can solve the problem, since there is sometimes noise on the edge of the pupil (e.g., reflections of eyelashes).

I have entry-level knowledge about image processing and machine learning. Right now, I am thinking about three possibilities:

  1. Do a regression on the threshold value based on some variables
  2. Make a specific mask only for edge detection of the pupil
  3. Classification of each pixel (this looks like a lot of work to build the training set)

Am I on the right track? I would like to reach out to anyone with more experience on this type of problem. Any tips/suggestions are more than welcome. Thanks!

Upvotes: 0

Views: 2598

Answers (4)

Maverick

Reputation: 1

I think you can try Active Shape Modelling, or, if you want a really feature-rich model and do not care about the time it takes to execute the algorithm, you can try Active Appearance Modelling. You might want to look into these papers for a better understanding:

Active Shape Models: Their Training and Application

Statistical Models of Appearance for Computer Vision - In Depth

Upvotes: 0

Ankit Dixit

Reputation: 750

I have written a small MATLAB script for the image you linked. The function I used is imfindcircles, which detects circles via the Hough transform; it is also implemented in OpenCV, so porting will not be a problem. I just want to know whether I am on the right track or not.

My code and result are as follows:

        % Detect dark circles (pupil/iris) via the circular Hough transform
        clc
        clear all
        close all

        im = imresize(imread('irisdet.JPG'), 0.5);   % downscale for speed

        gray = rgb2gray(im);

        % search for dark circles with radii between 50 and 100 pixels
        Rmin = 50; Rmax = 100;
        [centersDark, radiiDark] = imfindcircles(gray, [Rmin Rmax], 'ObjectPolarity', 'dark');

        % overlay the detected circles on the original image
        figure, imshow(im, [])
        viscircles(centersDark, radiiDark, 'EdgeColor', 'b');

Input Image:


Result of Algorithm:


Thank You

Upvotes: 1

Elad Joseph

Reputation: 3068

I think that, to start, you should put aside the machine learning. You have much more to try in "regular" computer vision.

You need to try to describe a model for your problem. A good way to do this is to sit and think about how you, as a person, detect an iris. For example, I can think of:

  1. It is near the center of the image.
  2. It is a brown/green/blue circle with a distinct black center, surrounded by a mostly white ellipse.
  3. There is skin color around the white ellipse.
  4. It can't be too small or too large (depending on your images).

After you build your model, try to find better ways to detect these features. It is hard to point to specific things, but you can start with: the HSV color space, correlation, the Hough transform, morphological operations..

Only after you feel you have exhausted all conventional tools should you start thinking about feature extraction and machine learning..

And BTW, since you are not the first person to try to detect an iris, you can look at other projects for ideas.

Upvotes: 2

Flying_Banana

Reputation: 2910

Not sure about iris classification, but I've done handwritten digit recognition from photos. I would recommend turning up the contrast and saturation, then using a k-nearest-neighbour algorithm to classify your images. Depending on your training set, you can get accuracy as high as 90%.

I think you are on the right track. Do image preprocessing to make classification easier, then train an algorithm of your choice. You would want to treat each image as one input vector, though, instead of classifying each pixel!
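A minimal sketch of the suggested setup, with each image flattened into a single feature vector and a hand-rolled nearest-neighbour vote (toy data standing in for real eye images):

```python
import numpy as np

def knn_predict(train_X, train_y, x, k=3):
    """Classify one flattened image by majority vote of its k nearest neighbours."""
    dists = np.linalg.norm(train_X - x, axis=1)   # Euclidean distance to each sample
    nearest = train_y[np.argsort(dists)[:k]]      # labels of the k closest
    return np.bincount(nearest).argmax()          # majority vote

# toy "images": class 0 is dark overall, class 1 is bright overall
rng = np.random.default_rng(0)
dark = rng.uniform(0.0, 0.4, size=(10, 64))    # ten flattened 8x8 images
bright = rng.uniform(0.6, 1.0, size=(10, 64))
X = np.vstack([dark, bright])
y = np.array([0] * 10 + [1] * 10)

pred = knn_predict(X, y, np.full(64, 0.9))     # classify a bright query image
```

The key point from the answer is visible in the shapes: each row of `X` is one whole image, not one pixel.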

Upvotes: 0
