


Generating a new sequence of pathlet models

Suppose that we have already generated a sequence of pathlet states

\begin{displaymath}
{\cal S} = \left\{{\cal{G}}_{n},{\cal{G}}_{n-1},\ldots,{\cal{G}}_{2}, {\cal{G}}_{1}\right\} \eqno{(33)}
\end{displaymath}

We want to extend that generated sequence with another pathlet state ${\cal{G}}_{0}$.

To do this, we evaluate every possible extension: for each candidate pathlet state ${\cal{G}}_{0}$, we extract from the tree the probability of the sequence $\left\{{\cal{G}}_{n},{\cal{G}}_{n-1},\ldots,{\cal{G}}_{1}, {\cal{G}}_{0}\right\}$. This gives a probability distribution over the choice of ${\cal{G}}_{0}$, from which we stochastically sample the pathlet state to add. A sketch of this sampling step is given below.
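As a concrete illustration, here is a minimal Python sketch of the sampling step. The helper sequence_probability, which returns the probability of a candidate extension as read from the VLMM tree, is a hypothetical name; its back-off lookup is sketched later in this section.

\begin{verbatim}
import random

def extend_sequence(history, alphabet, sequence_probability):
    """Pick the next pathlet state G_0 with which to extend S.

    `alphabet` lists all pathlet states; `sequence_probability(history, g0)`
    is assumed to return the probability of {G_n, ..., G_1, g0} as read
    from the VLMM tree.  random.choices normalises the weights itself,
    so they only need to be proportional to the target distribution.
    """
    weights = [sequence_probability(history, g0) for g0 in alphabet]
    return random.choices(alphabet, weights=weights, k=1)[0]
\end{verbatim}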

Figures 7.2 and 7.5 illustrate this approach (the two figures show parts of the same generated tree). Suppose $\cal{S}$ is composed of the pathlet states depicted in nodes A1 and A2 of the tree (figure 7.2); we extend the trajectory generated by the sequence $\cal{S}$ with one more pathlet.

Figure 7.5: VLMM tree of pathlet states for the trajectory T2. Only a part of the tree is shown.
\includegraphics[height=90mm,keepaspectratio]{leaftrajtreepred.eps}

For each node on the first level of the tree, we look for the history $\cal{S}$ in the levels below it. For instance, for node A3 the history $\cal{S}$ can be found in the tree, and the whole resulting sequence is depicted by A in figure 7.2. The probability of that sequence is read from node A1.

For node B4, the history $\cal{S}$ cannot be found; not even its most recent elements are present. The probability of the sequence we are looking for is therefore read from node B4 itself, multiplied by a uniform probability for each missing element of $\cal{S}$. The resulting probability is:

\begin{displaymath}
\frac{P_{B4}}{{\vert\Sigma\vert}^2} \eqno{(34)}
\end{displaymath}

where $P_{B4}$ is the probability read from node B4 and $\vert\Sigma\vert$ is the number of pathlet models.
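For instance, with $\vert\Sigma\vert = 10$ pathlet models and $P_{B4} = 0.3$ (illustrative values, not taken from the experiments), the sequence would receive probability $0.3/10^{2} = 0.003$.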

For node C2 in figure 7.5, only the last element of $\cal{S}$ can be found in the tree: node C1 encodes the same pathlet state as the last element of $\cal{S}$, namely the pathlet state seen in node A2. In that case, a uniform probability is assumed for the remaining pathlet state of $\cal{S}$, and the probability of the sequence we are looking for is:

\begin{displaymath}
\frac{P_{C1}}{\vert\Sigma\vert} \eqno{(35)}
\end{displaymath}

where $P_{C1}$ is the probability read from node C1.

For node E1 in figure 7.5, no element of the history can be found in the tree, so the probability is:

\begin{displaymath}
\frac{P_{E1}}{{\vert\Sigma\vert}^2} \eqno{(36)}
\end{displaymath}

where $P_{E1}$ is the probability read from node E1.

Finally, for node D3 in figure 7.5, the whole history can be found in the tree, and the probability is read directly from node D1.
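Taken together, the four cases amount to descending the tree as far as the history matches and dividing by $\vert\Sigma\vert$ once for every unmatched element. The following Python sketch makes this back-off rule explicit; the Node structure, its field names, and the handling of a candidate absent from the first level are assumptions for illustration, not details taken from the implementation.

\begin{verbatim}
from dataclasses import dataclass, field

@dataclass
class Node:
    probability: float                            # probability stored here
    children: dict = field(default_factory=dict)  # pathlet state -> child Node

def sequence_probability(root, history, candidate, n_models):
    """Probability of the sequence {G_n, ..., G_1, candidate}.

    `history` is ordered most recent element first: [G_1, G_2, ..., G_n].
    We descend the tree as far as the history matches; every element that
    cannot be matched contributes a uniform factor 1/|Sigma|, where
    |Sigma| = n_models is the number of pathlet models.
    """
    node = root.children.get(candidate)
    if node is None:
        # Candidate unseen even at the first level: fall back to a uniform
        # probability for the whole sequence (an assumption; the text only
        # discusses candidates that are present in the tree).
        return n_models ** -(len(history) + 1)
    matched = 0
    for state in history:
        child = node.children.get(state)
        if child is None:
            break
        node = child
        matched += 1
    # matched == len(history): cases A3 and D3, probability read directly
    #   from the deepest node (A1, D1);
    # matched == 1: case C2, one uniform factor (equation 35);
    # matched == 0: cases B4 and E1, one uniform factor per history
    #   element (equations 34 and 36).
    return node.probability / n_models ** (len(history) - matched)
\end{verbatim}

Bound to a particular tree (for instance via a small lambda closing over root and n_models), this scorer can be passed directly to the sampling sketch given earlier.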

We compute such a probability for every remaining possible pathlet state. Sampling at random from the computed probabilities, as described in section 6.4, gives us the pathlet state with which to extend $\cal{S}$.


