Reputation: 1393
I am looking for suggestions on how to approach the following computer vision problem. Below are 4 samples from an eye tracking dataset that I am working with. I would like to write code that takes one such image and calculates the (x, y) position of the center of the pupil. I am currently using MATLAB, but I am open to using other software too.
Can someone recommend an approach I could use for this task? Here are some things I already tried that didn't work too well.
Any comments/suggestions would be appreciated!
EDIT: Thanks for the comment, Stargazer. The algorithm should ideally be able to determine that the pupil is not in the image, as is the case in the last sample. It's not a big deal if I lose track of it for a while; it's much worse if it gives me a wrong answer, though.
Upvotes: 10
Views: 8146
Reputation: 11
import java.awt.Robot;                      % add the Java Robot class to the current import list
import java.awt.event.*;
robot = Robot();

obj = videoinput('winvideo', 2);            % set the device ID and supported format
set(obj, 'FramesPerTrigger', Inf);          % trigger indefinitely
set(obj, 'ReturnedColorspace', 'rgb');      % acquire video in RGB format
obj.FrameGrabInterval = 5;                  % the object acquires every 5th frame from the video stream
start(obj);                                 % start the video
time = 0;

while true
    data = getsnapshot(obj);
    image(data);
    filas = size(data, 1);
    columnas = size(data, 2);
    % Image center
    centro_fila = round(filas / 2);
    centro_columna = round(columnas / 2);
    figure(1);
    if size(data, 3) == 3
        data = rgb2gray(data);
        % Extract edges (the Hough accumulator is computed but not used further)
        BW = edge(data, 'canny');
        [H, T, R] = hough(BW, 'RhoResolution', 0.5, 'Theta', -90:0.5:89.5);
    end
    subplot(212)
    piel = ~im2bw(data, 0.19);
    piel = bwmorph(piel, 'close');
    piel = bwmorph(piel, 'open');
    piel = bwareaopen(piel, 275);
    piel = imfill(piel, 'holes');
    imagesc(piel);
    % Label connected objects in the binary image
    L = bwlabel(piel);
    % Get areas and bounding boxes
    out_a = regionprops(L);
    % Count the number of objects
    N = size(out_a, 1);
    if N < 1 || isempty(out_a)   % skip the frame if there is no object in the image
        solo_cara = [];
        continue
    end
    % Select the largest area
    areas = [out_a.Area];
    [area_max, pam] = max(areas);
    subplot(211)
    imagesc(data);
    colormap gray
    hold on
    rectangle('Position', out_a(pam).BoundingBox, 'EdgeColor', [1 0 0], ...
        'Curvature', [1, 1], 'LineWidth', 2)
    centro = round(out_a(pam).Centroid);
    X = centro(1);
    Y = centro(2);
    robot.mouseMove(X, Y);
    text(X + 10, Y, ['(', num2str(X), ',', num2str(Y), ')'], 'Color', [1 1 1])
    if X < centro_columna && Y < centro_fila
        title('Top left')
    elseif X > centro_columna && Y < centro_fila
        title('Top right')
    elseif X < centro_columna && Y > centro_fila
        title('Bottom left')
    else
        title('Bottom right')
    end
end
Upvotes: 1
Reputation: 176
OpenCV, with bindings for Python, C, C++, Java and others, would be a good tool for this. There is a Python tutorial here: http://docs.opencv.org/trunk/doc/py_tutorials/py_objdetect/py_face_detection/py_face_detection.html, but there are definitely tutorials for the other supported languages as well. OpenCV ships with a number of Haar cascades out of the box, including one for eye detection. If you actually want to implement a solution using the Hough circle transform, OpenCV has an appropriate function for that too.
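For illustration, here is a minimal Python sketch that combines the bundled eye cascade with cv2.HoughCircles. The input file name 'eye.png' and all detection parameters are assumptions for the example and would need tuning for real eye-tracker images:

import cv2

# Load the eye Haar cascade shipped with the opencv-python packages and a hypothetical input image
eye_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + 'haarcascade_eye.xml')
img = cv2.imread('eye.png')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Detect candidate eye regions; scaleFactor/minNeighbors are typical starting values
eyes = eye_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
for (x, y, w, h) in eyes:
    roi = cv2.medianBlur(gray[y:y+h, x:x+w], 5)
    # Run the circle Hough transform inside the eye region to locate the pupil
    circles = cv2.HoughCircles(roi, cv2.HOUGH_GRADIENT, dp=1, minDist=int(w),
                               param1=100, param2=20, minRadius=3, maxRadius=int(w) // 2)
    if circles is not None:
        cx, cy, r = circles[0, 0]
        print('pupil centre (image coordinates):', (x + cx, y + cy))

Restricting the Hough search to the cascade's eye region keeps the circle detection from latching onto eyebrows or glasses frames elsewhere in the frame.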
Upvotes: 1
Reputation: 99
Use OpenCV with Python. It is very easy for beginners to work with OpenCV.
Procedure:
* If you are using a normal webcam
1. First capture a frame with the VideoCapture function.
2. Convert it into a grayscale image.
3. Find the Canny edges using the cv2.Canny() function.
4. Apply the HoughCircles function. It will find the circles in the image as well as their centers.
5. Use the resulting parameters of HoughCircles to draw a circle around the pupil. That's it (a minimal sketch follows these steps).
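A minimal sketch of those five steps, assuming the default webcam at index 0; the Hough parameters below are illustrative and will need tuning for your images:

import cv2

cap = cv2.VideoCapture(0)                                # step 1: grab frames from the webcam
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)       # step 2: convert to grayscale
    blur = cv2.medianBlur(gray, 5)
    edges = cv2.Canny(blur, 50, 150)                     # step 3: Canny edges (for inspection;
                                                         # HoughCircles also runs Canny internally via param1)
    # step 4: the circle Hough transform returns (x, y, radius) for each detected circle
    circles = cv2.HoughCircles(blur, cv2.HOUGH_GRADIENT, dp=1, minDist=50,
                               param1=150, param2=30, minRadius=5, maxRadius=60)
    if circles is not None:                              # step 5: draw the detected pupil
        x, y, r = [int(round(v)) for v in circles[0, 0]]
        cv2.circle(frame, (x, y), r, (0, 255, 0), 2)
        cv2.circle(frame, (x, y), 2, (0, 0, 255), 3)
    cv2.imshow('pupil', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()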
Upvotes: 5
Reputation: 1808
I'm not sure whether this can help you, since you are working from an existing dataset and I don't know how much flexibility you have to change the capture device. Just in case, here it goes.
Morimoto et al. use a nice camera trick. They built a camera with two sets of infrared LEDs: the first set is placed close to the camera lens, the second set far from it. Driven at different frequencies, the two LED sets are switched on at different moments.
The retina reflects the light from the set near the lens (the same effect that causes red eyes in photography), producing a bright pupil, while the other set produces a dark pupil. A simple difference between the two images therefore gives you a near-perfect pupil segmentation. Also take a look at how Morimoto et al. exploit the corneal glint (useful for estimating gaze direction).
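If you can capture such synchronized bright-pupil and dark-pupil frames, the differencing step is straightforward. Here is a minimal Python/OpenCV sketch, where the file names and the Otsu thresholding are assumptions for illustration rather than part of Morimoto et al.'s method:

import cv2
import numpy as np

# Hypothetical file names: one frame lit by the on-axis LEDs, one by the off-axis LEDs
bright = cv2.imread('bright_pupil.png', cv2.IMREAD_GRAYSCALE)
dark = cv2.imread('dark_pupil.png', cv2.IMREAD_GRAYSCALE)

diff = cv2.subtract(bright, dark)                        # pupil stands out as the brightest region
diff = cv2.GaussianBlur(diff, (5, 5), 0)
_, mask = cv2.threshold(diff, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Take the centroid of the largest connected component as the pupil centre
num, labels, stats, centroids = cv2.connectedComponentsWithStats(mask)
if num > 1:
    largest = 1 + int(np.argmax(stats[1:, cv2.CC_STAT_AREA]))
    cx, cy = centroids[largest]
    print('pupil centre at', (cx, cy))
else:
    print('no pupil found in this frame pair')

The "no pupil found" branch also addresses your edit: if the difference image contains no bright blob, the pair can be reported as pupil-absent instead of returning a bogus position.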
Upvotes: 6