next up previous index
Next: Video sequence V2: change Up: Description of the data Previous: Description of the data   Index




Video sequence V1: shaking

The first video sequence we used to assess our behaviour model is a video of a person gesturing ``no'' with their head.

This video was shot with a Canon PowerShot A80 digital camera and contains 317 frames. Figure 4.3 shows some frames extracted from the video sequence. The file examples/V1/V1_orig.m1v on the accompanying CD-ROM shows the full sequence.

Figure 4.3: Frames extracted from the video V1.
\includegraphics[width=145mm,keepaspectratio]{seq_non_orig.eps}

135 frames of this video have been marked up by hand using 15 points, as depicted in figure 4.4. The hand-labelled points are placed on the corners and at the centre of the mouth, on the eyes, on the nostrils and at the border of the face. The first mode of variation of the resulting appearance model can be seen in figure 4.5. The sequence has been successfully tracked using this model. The relative coordinates of the points, as well as the pose, the scale and the position of the face, have been reduced to 7 parameters per frame. The first 3 of these parameters control the expression of the face.
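The reduction described above can be sketched with a principal component analysis over the per-frame shape vectors. This is only an illustrative sketch, not the thesis's actual pipeline: the frame count (135) and point count (15) follow the text, but the random data, variable names and choice of SVD-based PCA are assumptions.

```python
import numpy as np

# Hypothetical sketch: each frame is a shape vector of 15 hand-labelled
# (x, y) points, i.e. 30 numbers, reduced to 7 appearance parameters by PCA.
# Random data stands in for the real hand-labelled coordinates.
rng = np.random.default_rng(0)
shapes = rng.normal(size=(135, 30))          # 135 frames x 15 (x, y) points

mean_shape = shapes.mean(axis=0)
centred = shapes - mean_shape

# Principal components via SVD of the centred data matrix; rows of Vt are
# the modes of variation, ordered by decreasing singular value.
U, S, Vt = np.linalg.svd(centred, full_matrices=False)

n_params = 7                                  # as in the text
params = centred @ Vt[:n_params].T            # per-frame appearance parameters

print(params.shape)                           # (135, 7)
```

Each row of `params` would then be one point in the 7-dimensional parameter space, and the leading columns correspond to the largest modes of variation such as the one shown in figure 4.5.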

Figure 4.4: Hand labelling of a frame from video V1.
\includegraphics[width=145mm,keepaspectratio]{non_marked.eps}

Figure 4.5: First mode of variation of the appearance model extracted from video V1.
\includegraphics[height=30mm,keepaspectratio]{non_mode1.eps}

In this simple case, the parameters can be displayed on a graph. Figure 4.6 shows the points corresponding to the first 3 parameters extracted for each frame of video V1. Figure 4.7 shows the projection of this graph onto the first 2 parameters, the two largest modes of variation of the face.
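Because the parameters are ordered by decreasing variance, the projections of figures 4.6 and 4.7 amount to keeping the leading columns of the parameter matrix. A minimal sketch, assuming a (frames × parameters) array with decaying variance as a stand-in for the real tracked data:

```python
import numpy as np

# Hypothetical stand-in for the 135 tracked frames x 7 appearance
# parameters; the decaying scale mimics PCA-ordered modes of variation.
rng = np.random.default_rng(1)
params = rng.normal(size=(135, 7)) * np.linspace(3.0, 0.3, 7)

proj_3d = params[:, :3]   # the 3-D plot of figure 4.6
proj_2d = params[:, :2]   # the 2-D plot of figure 4.7

# Fraction of total variance retained by the 2-D projection.
var = params.var(axis=0)
retained = var[:2].sum() / var.sum()
print(proj_2d.shape, retained)
```

On real tracked data the retained fraction indicates how faithfully the 2-D view of figure 4.7 summarises the full trajectory.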

Figure 4.6: Projection of the first 3 appearance parameters extracted from video V1.
\includegraphics[height=145mm,keepaspectratio]{nonvplotaxes.eps}

Figure 4.7: Projection of the first 2 appearance parameters extracted from video V1.
\includegraphics[keepaspectratio]{nmmorigaxes.eps}

Figure 4.8 shows the faces synthesised from the appearance parameters extracted for some frames of the video sequence V1.

Figure 4.8: Frames extracted from the video V1 after tracking.
\includegraphics[width=125mm,keepaspectratio]{seq_non.eps}

The file examples/V1/V1_track.m1v on the accompanying CD-ROM shows the synthesised version of the tracked face for every frame. This video sequence can be visually compared to the original sequence to verify that the face has been tracked correctly.

The file tracked/V1_graph.m1v on the accompanying CD-ROM shows the generated version of the video sequence V1 after tracking, along with the graph from figure 4.6 plotted in real time. For each frame, the point corresponding to the synthesised parameters is added.

Since the graph of appearance parameters can easily be plotted for this video, we chose this sequence to illustrate our algorithms in the following sections of the thesis.



franck 2006-10-01