


The storage of transitions

Let ${\cal S}$ be a sequence of prototypes $\left\{{\cal P}_{n},{\cal P}_{n-1},\ldots,{\cal P}_{2},{\cal P}_{1}\right\}$. We want to predict or generate the next prototype ${\cal P}_{0}$ given the history ${\cal S}$. To do that, we need the probability:

\begin{displaymath}P \left({\cal N} \mid {\cal P}_{n},{\cal P}_{n-1},\ldots,{\cal P}_{2},{\cal P}_{1} \right)\end{displaymath}

for all possible prototypes ${\cal N}$.

As the prototypes are discrete, we can use Markov models to learn such probabilities from typical sequences [47]. In this case, since we want to predict a prototype given a history of $n$ prototypes, we are interested in $n$-th order Markov models or $n$-th order hidden Markov models.
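
For illustration, the transition probabilities of such a model can be estimated by maximum likelihood, i.e. by counting how often each prototype follows each length-$n$ context in the training sequences. The following Python sketch (the function and variable names are illustrative, not taken from the text) shows the idea for prototypes encoded as integer labels:

\begin{verbatim}
from collections import defaultdict

def estimate_transitions(sequence, n):
    """Maximum-likelihood estimate of P(next | last n prototypes).

    `sequence` is a list of discrete prototype labels; we count each
    length-n context together with the prototype that follows it.
    """
    counts = defaultdict(lambda: defaultdict(int))
    for i in range(n, len(sequence)):
        context = tuple(sequence[i - n:i])
        counts[context][sequence[i]] += 1

    # Normalise the counts into conditional probabilities.
    probs = {}
    for context, successors in counts.items():
        total = sum(successors.values())
        probs[context] = {p: c / total for p, c in successors.items()}
    return probs

# Example: a 2nd-order model over a toy prototype sequence.
seq = [0, 1, 2, 0, 1, 2, 0, 1, 3]
model = estimate_transitions(seq, n=2)
print(model[(0, 1)])   # {2: 0.666..., 3: 0.333...}
\end{verbatim}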

Unfortunately, high-order Markov models can be vastly more expensive than their first-order counterparts. The memory required to store all the transition probabilities and the processing time required to train them are serious issues, especially for a large number of states.
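
To make the cost concrete (a back-of-the-envelope count, not a figure from the text): a full $n$-th order model over $K$ states stores one successor distribution per length-$n$ context, i.e. on the order of

\begin{displaymath}K^{n} \times (K-1) \approx K^{n+1}\end{displaymath}

free parameters. With, say, $K=100$ prototypes, a third-order model already needs on the order of $10^{8}$ values, and the amount of training data required to estimate them reliably grows just as fast.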

Here the states are defined by the prototypes. Due to the method used to find the prototypes (see section 4.3), the number of prototypes found in a video sequence can be very large, and the prototype set grows even further if speed information is added to the prototype encoding. The dimensionality of the problem therefore has to be reduced in one way or another.

The variable length Markov model provides an efficient way of storing the transition probabilities [51]. It is an approximation of an $n$-th order Markov model in which the transitions that do not carry useful information are discarded.
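
The idea can be sketched as follows (a simplified illustration, not the exact algorithm of [51]; practical learners such as prediction suffix trees typically also weight the pruning test by how often a context occurs, and the function names and threshold below are illustrative). Successor statistics are gathered for contexts of increasing length, and a longer context is kept only when its predictive distribution differs measurably, e.g. in Kullback-Leibler divergence, from that of its shorter suffix:

\begin{verbatim}
import math
from collections import defaultdict

def build_vlmm(sequence, max_order, threshold=0.05):
    """Variable-length Markov model sketch: a context is stored
    only when it predicts differently from its shorter suffix."""
    # Count successors for every context up to length max_order.
    counts = defaultdict(lambda: defaultdict(int))
    for i in range(len(sequence)):
        for n in range(min(max_order, i) + 1):
            counts[tuple(sequence[i - n:i])][sequence[i]] += 1

    def dist(context):
        total = sum(counts[context].values())
        return {p: c / total for p, c in counts[context].items()}

    model = {(): dist(())}          # always keep the empty context
    for context in sorted(counts, key=len):
        if not context:
            continue
        suffix = context[1:]        # drop the oldest prototype
        d_long, d_short = dist(context), dist(suffix)
        # KL divergence between long- and short-context predictions.
        kl = sum(p * math.log(p / d_short[s])
                 for s, p in d_long.items())
        if kl > threshold:
            model[context] = d_long
    return model

def predict(model, history):
    """Back off to the longest stored suffix of the history."""
    for k in range(len(history), -1, -1):
        context = tuple(history[len(history) - k:])
        if context in model:
            return model[context]
\end{verbatim}

Prediction backs off to the longest stored suffix of the current history, so the effective order varies from one context to the next; the memory savings come precisely from the contexts that were pruned away.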

