


Modelling pairs of eyes

To recognize eyes in an image, we first need a model of eyes. The model is computed using an orientation histogram.

First, a Canny edge detector is applied to the face image [54], which gives an edge image. To model an eye, the image gradient is computed near the known location of the eye and quantized into the eight possible directions. This gives a set of edge-strength values that we represent by a vector $v_i$ for each image $i$ in the training set. We then apply principal component analysis (PCA) to this set of vectors and obtain the eigenvectors and eigenvalues that model the distribution of orientation histograms.
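The two steps above can be sketched in Python with NumPy. This is a minimal illustration, not the thesis code: the window size, the gradient operator (`np.gradient` rather than the Canny pipeline), and the number of retained components are all assumptions.

```python
import numpy as np

def orientation_histogram(gray, cx, cy, half=8):
    """8-bin orientation histogram of gradient edge strengths
    in a square window centred on (cx, cy). Window size is assumed."""
    patch = gray[cy - half:cy + half, cx - half:cx + half].astype(float)
    gy, gx = np.gradient(patch)                 # image gradients
    mag = np.hypot(gx, gy)                      # edge strength
    ang = np.arctan2(gy, gx)                    # orientation in (-pi, pi]
    bins = ((ang + np.pi) / (2 * np.pi) * 8).astype(int) % 8
    hist = np.zeros(8)
    np.add.at(hist, bins.ravel(), mag.ravel())  # sum edge strength per direction
    return hist

def pca_model(vectors, keep=4):
    """PCA of the training vectors v_i: returns the mean vector and the
    leading eigenvalues/eigenvectors of their covariance matrix."""
    X = np.asarray(vectors, dtype=float)
    mean = X.mean(axis=0)
    cov = np.cov(X - mean, rowvar=False)
    evals, evecs = np.linalg.eigh(cov)          # ascending eigenvalues
    order = np.argsort(evals)[::-1][:keep]      # keep the largest ones
    return mean, evals[order], evecs[:, order]
```

In use, `orientation_histogram` is evaluated at the hand-labelled eye location of every training image, and `pca_model` is fitted to the resulting set of vectors.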

In real-life images, many regions could be mistaken for an eye. That is why it is better to model pairs of eyes rather than single eyes: a pair lets us use the relative distance between the eyes and also the angle between the horizontal and the line joining them. Figure 4.1 shows how a grid can be applied to the eyes, scaled by the relative distance between them, in order to make use of the neighborhood of each eye. The previous procedure is applied in the same manner, except that the nine vectors corresponding to the nine grid squares are concatenated before applying PCA.

Figure 4.1: Grid applied on eyes for the modelling of pairs of eyes.
\begin{figure}\begin{center}
\epsfbox{eyesgrid.eps}
\end{center}
\end{figure}
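One way to realize this grid-based feature is sketched below. The exact grid layout and cell size are not specified in the text, so the 3x3 arrangement spanning the eye pair and the cell size of roughly a third of the inter-eye distance are assumptions; the per-cell histogram is the same 8-direction histogram as before.

```python
import numpy as np

def cell_histogram(gray, cx, cy, half):
    """8-bin orientation histogram of one grid cell."""
    patch = gray[cy - half:cy + half, cx - half:cx + half].astype(float)
    gy, gx = np.gradient(patch)
    bins = ((np.arctan2(gy, gx) + np.pi) / (2 * np.pi) * 8).astype(int) % 8
    hist = np.zeros(8)
    np.add.at(hist, bins.ravel(), np.hypot(gx, gy).ravel())
    return hist

def eye_pair_vector(gray, left, right):
    """Concatenate nine cell histograms on a 3x3 grid spanning the eye
    pair, as in Figure 4.1. Grid geometry here is an assumption."""
    left = np.asarray(left, float)
    right = np.asarray(right, float)
    centre = (left + right) / 2
    d = np.linalg.norm(right - left)            # inter-eye distance
    half = max(2, int(d / 3))                   # cell half-size ~ d/3 (assumed)
    feats = []
    for dy in (-1, 0, 1):                       # 3x3 grid of cells
        for dx in (-1, 0, 1):
            cx = int(centre[0] + dx * 2 * half)
            cy = int(centre[1] + dy * 2 * half)
            feats.append(cell_histogram(gray, cx, cy, half))
    return np.concatenate(feats)                # 9 cells x 8 bins = 72 values
```

The concatenated vectors, one per training image, are then fed to PCA exactly as in the single-eye case.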

A model of the relative position of the eyes can also be obtained by applying PCA to the distance between the eyes and the angle of the joining line with the horizontal.
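This geometric model can be sketched as follows, assuming the training set supplies one (left, right) coordinate pair per image; the function names are illustrative.

```python
import numpy as np

def pair_geometry(left, right):
    """Distance between the eyes and angle of the joining line
    with the horizontal, for one eye pair."""
    dx = right[0] - left[0]
    dy = right[1] - left[1]
    return np.array([np.hypot(dx, dy), np.arctan2(dy, dx)])

def geometry_model(pairs):
    """PCA of the (distance, angle) vectors over the training set."""
    X = np.array([pair_geometry(l, r) for l, r in pairs])
    mean = X.mean(axis=0)
    cov = np.cov(X - mean, rowvar=False)
    evals, evecs = np.linalg.eigh(cov)          # ascending eigenvalues
    order = np.argsort(evals)[::-1]
    return mean, evals[order], evecs[:, order]
```

Since the feature is only two-dimensional, this PCA amounts to fitting a Gaussian to the joint distribution of inter-eye distance and in-plane rotation.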



franck 2006-10-16