


The storage of transitions

Let $\cal{S}$ be a sequence of states $\left\{{\cal{G}}_{n},{\cal{G}}_{n-1},\ldots,{\cal{G}}_{2},{\cal{G}}_{1}\right\}$. We want to predict or generate the next state ${\cal{G}}_{0}$ given the history $\cal{S}$. To do so, we need the probability:

\begin{displaymath}P \left({\cal{N}} \vert {\cal{G}}_{n},{\cal{G}}_{n-1},\ldots,{\cal{G}}_{2},{\cal{G}}_{1} \right)\end{displaymath}

for all possible states ${\cal{N}}$.

We can use Markov models to learn such probabilities from typical sequences [77]. Here, since we want to predict a pathlet model given a history of $n$ pathlet models, we are interested in $n$-th order Markov models or $n$-th order hidden Markov models.
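As an illustration, such probabilities can be estimated for a fixed order $n$ by maximum-likelihood counting over a training sequence. The sketch below assumes the pathlet states can be treated as discrete symbols; learn_transitions and the toy labels are illustrative, not code from this work.

\begin{verbatim}
from collections import defaultdict

def learn_transitions(sequence, n):
    # Estimate P(next state | n previous states) by counting.
    counts = defaultdict(lambda: defaultdict(int))
    for i in range(n, len(sequence)):
        history = tuple(sequence[i - n:i])   # the n most recent states
        counts[history][sequence[i]] += 1
    # Normalise each history's counts into a distribution.
    return {h: {s: c / sum(nxt.values()) for s, c in nxt.items()}
            for h, nxt in counts.items()}

# Toy usage, with characters standing in for pathlet states:
probs = learn_transitions(list("abcabcabd"), n=2)
print(probs[("a", "b")])   # {'c': 0.66..., 'd': 0.33...}
\end{verbatim}

Note that the table indexed by histories grows with the number of distinct histories observed, which foreshadows the storage problem discussed next.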

Unfortunately, high-order Markov models can be vastly more expensive than their first-order counterparts. The memory required to store all the transition probabilities and the processing time required to train them are serious issues, especially for a large number of states.
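To make the cost concrete, consider a full $n$-th order model over $K$ pathlet states (the figures here are illustrative). In the worst case it stores one probability for each of the $K^{n}$ possible histories and each of the $K$ possible next states:

\begin{displaymath}K^{n} \times K = K^{n+1}\end{displaymath}

With only $K = 100$ states at order $n = 3$, this is already $10^{8}$ probabilities, and the figure grows exponentially with the order.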

Here the states are defined by the pathlet models; we call them ``pathlet states''. Depending on the method used to find the pathlet models (see chapter 5) and on the training video sequence, the number of pathlet models found can be very large. The pathlet model set can grow even larger if we wish to model interaction[*]. Some form of dimensionality reduction is thus required.

The variable length Markov model provides an efficient way of storing the transition probabilities [81]. It approximates an $n$-th order Markov model by discarding the transitions that carry no useful information.
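The sketch below illustrates the pruning idea, again assuming discrete pathlet states: a context (history) is stored only if its predictive distribution differs enough, here in Kullback-Leibler divergence, from that of its shorter suffix. The helper names and the threshold are illustrative choices, not details taken from [81].

\begin{verbatim}
from collections import defaultdict
from math import log

def build_counts(sequence, max_order):
    # Count successors of every context up to length max_order.
    counts = defaultdict(lambda: defaultdict(int))
    for i in range(len(sequence)):
        for n in range(max_order + 1):
            if i - n < 0:
                break
            counts[tuple(sequence[i - n:i])][sequence[i]] += 1
    return counts

def distribution(counter):
    total = sum(counter.values())
    return {s: c / total for s, c in counter.items()}

def kl_divergence(p, q, eps=1e-12):
    # KL(p || q); eps guards against successors unseen under q.
    return sum(pv * log(pv / q.get(s, eps)) for s, pv in p.items())

def prune(counts, threshold=0.05):
    # Keep a context only if it changes the prediction enough
    # relative to its shorter suffix; otherwise discard it.
    kept = {(): distribution(counts[()])}
    for context in sorted(counts, key=len):
        if not context:
            continue
        parent = context[1:]      # same history minus its oldest state
        if parent not in kept:
            continue              # parent pruned: children die with it
        p = distribution(counts[context])
        if kl_divergence(p, kept[parent]) > threshold:
            kept[context] = p
    return kept

def predict(model, history):
    # Use the longest stored context that matches the recent history.
    for n in range(len(history), -1, -1):
        context = tuple(history[-n:]) if n > 0 else ()
        if context in model:
            return model[context]

seq = list("ababcababcababc")
model = prune(build_counts(seq, max_order=3))
print(predict(model, list("ab")))
\end{verbatim}

Only the informative contexts survive pruning, so the memory used adapts to the structure of the training data instead of growing as $K^{n+1}$.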

