next up previous index
Next: Conclusion Up: The acquisition of prototypes Previous: Improvements of the tracking   Index




Clustering trajectories into prototypes

The tracker described in the previous section produces a list of AAM parameter vectors corresponding to face positions over a sequence. To model this list of parameter vectors, Johnson and Hogg [39] propose an algorithm that clusters the set of vectors into prototype vectors. This algorithm has not yet been used in our framework, but it may be used later to compare the approach of Johnson and Hogg with our future approach.

The algorithm, known as vector quantization (VQ) [49], is an unsupervised learning mechanism closely related to the well-known $k$-means algorithm and is a classical method in signal processing. Each vector in the set can be approximated by one of the prototype vectors, usually referred to as codebook vectors.

Let $t$ be the epoch number and $N$ the number of iterations. The algorithm proceeds as follows:

1.
Randomly place $k$ prototypes in the parameter space.
2.
Select the current training vector ${\bf x}(t)$ randomly from the list of parameter vectors.
3.
Find the prototype ${\bf m}_c(t)$ which is closest to this input using the formula:

\begin{displaymath}\Vert{\bf x}(t)-{\bf m}_c(t)\Vert=\min_i\left\{\Vert{\bf x}(t)-{\bf m}_i(t)\Vert\right\} \end{displaymath}

4.
The prototype ${\bf m}_c(t)$ is then moved closer to ${\bf x}(t)$ by:

\begin{displaymath}{\bf m}_c(t+1) \leftarrow {\bf m}_c(t) + \alpha(t)\left({\bf x}(t)-{\bf m}_c(t)\right)\end{displaymath}

where $\alpha(t)$ is given by:

\begin{displaymath}\alpha(t)=\left\{
\begin{array}{ll}
1-0.99\left(\frac{2 t}{N}\right) & {\rm if} \; t < \frac{N}{2} \\
0.01 & {\rm if} \; t \geq \frac{N}{2}
\end{array}\right.
\end{displaymath}

The other prototypes do not move, so for $i \not= c$:

\begin{displaymath}{\bf m}_i(t+1) \leftarrow {\bf m}_i(t)\end{displaymath}

5.
Repeat steps 2, 3 and 4 until iteration $N$ is reached.

The factor $\alpha(t)$ is a learning parameter, often referred to as the cooling schedule. It tells the algorithm how vigorously to learn the current vector. If $\alpha(t)$ is close to $1$, the current vector is learned and all the previously seen vectors are forgotten. As $\alpha(t)$ decreases, the learning of new vectors becomes weaker and weaker, but the prototypes encode more and more of the history of the previously learned vectors.
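The steps above can be sketched in a few lines of Python. This is a minimal illustration, not the implementation used in the framework; the function names (`cooling_schedule`, `vq_cluster`) and the uniform random initialisation of the prototypes are assumptions made for the sketch.

```python
import numpy as np

def cooling_schedule(t, N):
    """alpha(t): decays linearly from 1 to 0.01 over the first
    half of training, then stays constant at 0.01."""
    if t < N / 2:
        return 1.0 - 0.99 * (2.0 * t / N)
    return 0.01

def vq_cluster(vectors, k, N, seed=None):
    """Cluster a (n, d) array of parameter vectors into k prototypes
    by online vector quantization (winner-take-all updates)."""
    rng = np.random.default_rng(seed)
    # 1. Randomly place k prototypes in the parameter space.
    lo, hi = vectors.min(axis=0), vectors.max(axis=0)
    prototypes = rng.uniform(lo, hi, size=(k, vectors.shape[1]))
    for t in range(N):
        # 2. Select the current training vector x(t) at random.
        x = vectors[rng.integers(len(vectors))]
        # 3. Find the closest prototype m_c(t).
        c = np.argmin(np.linalg.norm(prototypes - x, axis=1))
        # 4. Move the winner towards x(t); the others stay put.
        prototypes[c] += cooling_schedule(t, N) * (x - prototypes[c])
    return prototypes
```

Note that only the winning prototype is updated at each iteration, which is what distinguishes this online scheme from batch $k$-means, where every vector contributes to its cluster mean at each pass.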

This approach is a potential candidate for clustering trajectories into prototypes, but $k$-means clustering could be chosen as well.



franck 2006-10-16