Eye Tracking

This project deals with tracking eye movement and determining the point of gaze of the eye.

Abstract
This project deals with tracking eye movement and determining the point of gaze of the eye.
In this project an image of the eye is acquired. Several processes are run on the acquired image, producing new images that emphasize specific eye features (such as the pupil, cornea reflections, and eyelids).
The locations of these eye features can be determined by analyzing these images, and from these locations the angle of gaze of the eye can be calculated.

The Algorithm
All of the optical techniques rely, to some extent, on the following simple principle. If a landmark is fixed to a sphere, rotation of the sphere about its center
will cause a translation of that landmark proportional to the sine of the rotation angle. The relation for a single axis is:
1. d = r · sin(θ)
2. θ = sin⁻¹(d / r)
where d is the landmark translation, r is the distance from the center of the sphere to the landmark, and θ is the rotation angle (see Figure 1).
Figure 1: A sphere and a landmark fixed to it
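The basic relation in Equations (1)–(2) can be sketched numerically. This is a minimal illustration (the original project was written in Visual Basic; Python is used here only for clarity), with hypothetical values for d and r:

```python
import math

def rotation_angle(d, r):
    """Rotation angle (in radians) of a sphere, recovered from the
    translation d of a landmark fixed at distance r from the sphere's
    center. Inverts the relation d = r * sin(theta)."""
    return math.asin(d / r)

# Hypothetical numbers: a landmark at r = 10 mm that translates by
# d = 5 mm implies theta = asin(0.5), i.e. 30 degrees.
theta = rotation_angle(5.0, 10.0)
print(math.degrees(theta))  # → 30.0 (up to floating-point rounding)
```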
Of course, if the entire sphere translates with respect to the sensor, the landmark will translate by the same amount. Using Equation (2) alone would then
result in an erroneous angle computation:

3. θT = sin⁻¹(dT / r)

where dT is the translation of the entire sphere along the sensitive plane of the detector and θT is the erroneous rotation angle.
Correct computation of the rotation angle in the presence of translation requires some means of distinguishing landmark motion due to rotation
from that due to translation. If the translation of the entire sphere is measured by some independent means, this value can simply be subtracted from d.
Another approach is to detect the positions of two landmarks fixed to the sphere but located at different radii from its center. Two such
landmarks move together if the entire sphere translates, but move differentially if the sphere rotates:

4. d1 = r1 · sin(θ) + dT
5. d2 = r2 · sin(θ) + dT
6. Δd = d1 − d2 = (r1 − r2) · sin(θ)

Note that, whereas d1 and d2 are functions of both the rotation and the translation of the sphere, Δd (the relative motion of the two landmarks) is a
function only of the rotation.

In this implementation, the two landmarks chosen are the pupil and the reflection from the cornea (see Figure 2).

Figure 2: The location of the pupil and the cornea reflection in relation to the sensor surface and the illumination axis

The System Structure
The system consists of three major parts:

Electro-optical system – this part is responsible for illuminating the eye and grabbing the picture of the eye that will be used for further
analysis.
Image processing system – this part is responsible for the image processing and image analysis procedures.
Control & supervise system – this system is the user interface. It displays the kernels and images produced by the image
processing system. It also provides additional information and options for controlling the whole process (histograms of different parts of the
eye, changing the illumination levels, changing the kernels, etc.). See Figure 3.

Figure 3: Control & supervise system (the user interface)

Image Processing
Three kernels were run on the picture grabbed by the electro-optical system, each built to emphasize a certain eye feature. A
simple convolution was used to run the kernels over the eye image. Each kernel had a shape close to that of the feature it had to emphasize,
and therefore the maximum value of each convolution was obtained at the location of that specific eye feature. See Figures 4, 5, 6.
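The kernel-matching step above amounts to sliding each kernel over the image and taking the location of the maximum response. A minimal sketch (in Python, not the project's original Visual Basic/MIL code, and with a toy image standing in for a real eye picture):

```python
import numpy as np

def match_kernel(image, kernel):
    """Cross-correlate a kernel over an image ("valid" positions only)
    and return the (row, col) of the top-left corner with the maximum
    response, i.e. the most likely location of the feature the kernel
    was built to emphasize."""
    ih, iw = image.shape
    kh, kw = kernel.shape
    best, best_pos = -np.inf, (0, 0)
    for r in range(ih - kh + 1):
        for c in range(iw - kw + 1):
            score = np.sum(image[r:r + kh, c:c + kw] * kernel)
            if score > best:
                best, best_pos = score, (r, c)
    return best_pos

# Toy example: a dark "pupil" square on a bright background, matched
# with a kernel that responds most strongly to dark regions.
img = np.ones((20, 20))
img[8:12, 5:9] = 0.0          # dark 4x4 square as a stand-in pupil
kernel = -np.ones((4, 4))     # negative weights favor dark pixels
print(match_kernel(img, kernel))  # → (8, 5)
```

In the actual project, the kernel shapes approximated the pupil and the cornea reflections, as shown in Figures 4 and 5.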

Figure 4: The kernel for the pupil and its output image
Figure 5: The kernel for the cornea reflections and its output image

Image analysis was performed on the images with emphasized eye features. The analysis operated on blobs (connected areas of pixels
in the same state), which made it possible to work with properties of groups of pixels rather than of single pixels. From the blob analysis, the
exact locations of the relevant eye features (i.e., the center of the pupil and the cornea reflections) could be determined, and by using these locations, the
angle of gaze of the eye could be calculated as described above.
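The blob step can be sketched with standard connected-component labeling. This is an illustrative Python version (using SciPy's `ndimage`, not the project's original MIL routines), with a toy binary image in place of a thresholded kernel output:

```python
import numpy as np
from scipy import ndimage

def largest_blob_centroid(binary):
    """Label connected regions ("blobs") in a binary image and return
    the centroid (row, col) of the largest one - e.g. the pupil blob
    after thresholding the kernel's output image."""
    labels, n = ndimage.label(binary)
    if n == 0:
        return None
    # Blob sizes, one per label 1..n; pick the largest.
    sizes = ndimage.sum(binary, labels, range(1, n + 1))
    biggest = int(np.argmax(sizes)) + 1
    return ndimage.center_of_mass(binary, labels, biggest)

# Toy binary image: one large blob plus a single noise pixel.
img = np.zeros((10, 10), dtype=int)
img[2:6, 2:6] = 1   # large 4x4 blob, centroid (3.5, 3.5)
img[8, 8] = 1       # isolated noise pixel, ignored as the smaller blob
print(largest_blob_centroid(img))  # → (3.5, 3.5)
```

Working on blobs rather than single pixels makes the feature locations robust to isolated noisy pixels, which is the property the analysis above relies on.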

Tools
The software was developed in the Visual Basic environment and used the Matrox Imaging Library (MIL). The project was written under the
Win95/NT operating systems.
The following hardware was used in this project:

Frame grabber – MATROX METEOR-II multi-channel
Infrared 850 nm LEDs
D/A card – DATA TRANSLATION DT-331
Switches card – KEITHLEY PIO-32
Camera – WATEC 505EX

Conclusion
In this project, using simple elements of image processing and emphasizing the correct features of the eye enabled us to calculate the point of
gaze of the eye.
These data can be useful in many systems, such as weapon systems and systems for car drivers, pilots, etc.

Acknowledgments
Thanks to all who took part in helping me through this project. Special thanks to Hanan Shamir, whose help was of great importance.
Thanks to Eyal Regev, my supervisor at ELBIT, and Johanan Erez, my supervisor at the Technion. Thanks to the “Ollendorff Minerva Center” for supporting the VISL lab and enabling this project.