Active appearance models (AAMs) represent objects in images through their shape and texture. AAMs have been applied successfully in a large number of applications. Nevertheless, the standard active appearance model does not yet correctly model video sequences of animated objects.

The aim of this thesis is to add a temporal framework to the AAM search algorithm so that time is modelled properly in video sequences. We will then apply this extended model to the study of the interaction between two speakers in a conversation. The goal is to build a user interface that can respond to the user's reactions.

The method will use a statistical framework learned from a set of training video sequences. The series of parameters extracted from these video sequences will then be modelled as a set of sub-sequences. A higher-level model will then learn how to organise the sub-sequences into meaningful sequences representing facial expressions or typical movements of the head. The interaction between two persons will be based on interview-like training sequences. The joint behaviour of facial expressions will be modelled so that a behaviour can be predicted even when we are only able to track part of it.
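The pipeline described above — extracting a parameter series per frame, grouping it into sub-sequences, and learning a higher-level model over those sub-sequences — could be prototyped roughly as follows. This is a minimal sketch under assumptions of my own: fixed-length segmentation, nearest-neighbour assignment to a given codebook, and a first-order transition-count model. None of these choices is taken from the thesis itself.

```python
# Illustrative sketch (not the thesis's actual method): per-frame AAM
# parameter vectors are cut into fixed-length sub-sequences, each
# sub-sequence is assigned to its nearest codebook entry, and the
# resulting label stream is summarised by transition counts — a stand-in
# for the "higher level model" over sub-sequences.

def segment(frames, length):
    """Split a list of per-frame parameter vectors into
    non-overlapping sub-sequences of the given length."""
    return [frames[i:i + length]
            for i in range(0, len(frames) - length + 1, length)]

def nearest(codebook, sub):
    """Assign a sub-sequence to the closest codebook entry
    (squared L2 distance summed over frames and components)."""
    def dist(a, b):
        return sum((x - y) ** 2
                   for fa, fb in zip(a, b)
                   for x, y in zip(fa, fb))
    return min(range(len(codebook)), key=lambda k: dist(codebook[k], sub))

def transition_counts(labels, n_states):
    """Estimate the higher-level model: how often each sub-sequence
    label is followed by each other label."""
    counts = [[0] * n_states for _ in range(n_states)]
    for a, b in zip(labels, labels[1:]):
        counts[a][b] += 1
    return counts

# Toy usage: 8 frames of 2-D parameters alternating between two poses.
frames = [[0, 0], [0, 0], [1, 1], [1, 1],
          [0, 0], [0, 0], [1, 1], [1, 1]]
codebook = [[[0, 0], [0, 0]], [[1, 1], [1, 1]]]
subs = segment(frames, 2)
labels = [nearest(codebook, s) for s in subs]
counts = transition_counts(labels, 2)
```

In a real system the codebook would itself be learned from training sequences (e.g. by clustering), and the transition counts would be normalised into probabilities; the sketch only shows how the three modelling stages fit together.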




franck 2006-10-16