Suppose that we have already generated a sequence of pathlet states
$$S = (s_1, s_2, \ldots, s_n) \qquad (33)$$
In order to extend this sequence with one more pathlet state, we test every possible new sequence. For each candidate pathlet state $c$, we extract the probability of the sequence $h \cdot c$ obtained by appending $c$ to the recent history $h$ of generated pathlet states. This gives us a probability distribution for the choice of the next pathlet state $s_{n+1}$. We can then stochastically sample from that distribution to find the pathlet state to add.
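To make this step concrete, here is a minimal Python sketch of the sampling procedure, assuming a helper `sequence_probability(history, candidate)` that returns the (possibly unnormalised) probability of $h \cdot c$; the back-off lookup behind that helper is sketched after the tree walk-through below. All names are illustrative, not the implementation used in this work.

```python
import random

def sample_next_state(history, pathlet_states, sequence_probability):
    """Pick the next pathlet state by sampling from the tree-derived
    distribution over all candidate states."""
    scores = [sequence_probability(history, c) for c in pathlet_states]
    total = sum(scores)
    weights = [s / total for s in scores]  # normalise into a distribution
    # Stochastic sampling: more probable extensions are chosen more often.
    return random.choices(pathlet_states, weights=weights, k=1)[0]
```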
Figures 7.2 and 7.5 show this approach (the two figures show parts of the same generated tree). Suppose the history $h = (s_{n-1}, s_n)$ is composed of the pathlet states depicted in the nodes A1 and A2 of the tree (figure 7.2). We extend the trajectory generated by the sequence $S$ with one pathlet.
For each node on the first level of the tree, we look for the history $h$ in the next two levels. For instance, for the node A3, the history $h$ can be found in the tree, and the whole resulting sequence is depicted by A in figure 7.2. The probability for that sequence is read from the node A1.
For the node B4, the history cannot be found. The last element of $h$ alone cannot be found either, so the probability associated with the sequence we are looking for is read from the node B4 itself (we write $P_{B4}$ for this stored value), and we multiply that value by a uniform probability $1/N$, where $N$ is the number of possible pathlet states, for each element of $h$. The resulting probability is therefore:
$$P(h \cdot c) = P_{B4} \cdot \left(\frac{1}{N}\right)^{2} \qquad (34)$$
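For instance, with $N = 10$ possible pathlet states and a stored value $P_{B4} = 0.3$ (illustrative numbers only), equation (34) gives $P(h \cdot c) = 0.3 \times (1/10)^2 = 3 \times 10^{-3}$.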
For the node C2 in figure 7.5, only the last element of $h$ can be observed in the tree. Indeed, the node C1 encodes the same pathlet state as the last element of $h$, which is the pathlet state seen in node A2. In that case, a uniform probability is chosen for the remaining pathlet state in the sequence $h \cdot c$, and the probability of the sequence we are looking for is:
$$P(h \cdot c) = P_{C1} \cdot \frac{1}{N} \qquad (35)$$
For node E1 in figure 7.5, the history cannot be found in the tree. The probability is:
$$P(h \cdot c) = P_{E1} \cdot \left(\frac{1}{N}\right)^{2} \qquad (36)$$
Finally, for node D3 in figure 7.5, the history can be found in the tree and the probability can be read directly from the node D1.
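The four cases above amount to a back-off rule: under the candidate's first-level node, descend along the history from its most recent element as far as the tree allows, read the probability stored at the deepest node reached, and multiply by a uniform factor $1/N$ for every history element that could not be matched. The sketch below implements this reading of the text, assuming a node type with a stored `probability` and a `children` dictionary keyed by pathlet state (both hypothetical names):

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    probability: float                            # value stored at this node
    children: dict = field(default_factory=dict)  # pathlet state -> Node

def sequence_probability(root, history, candidate, n_states):
    """Back-off lookup of P(h . c): level 1 holds the candidate states,
    deeper levels hold the history, most recent element first."""
    node = root.children.get(candidate)
    if node is None:
        # Candidate absent from the tree: a case the text does not cover;
        # assumed fully uniform here.
        return (1.0 / n_states) ** (len(history) + 1)
    prob = node.probability
    missing = len(history)
    for state in reversed(history):       # most recent history element first
        child = node.children.get(state)
        if child is None:
            break                         # cases B4 and E1: stop matching
        node = child
        prob = node.probability           # cases A3/D3 end at A1/D1, C2 at C1
        missing -= 1
    # One uniform factor per unmatched history element, as in (34)-(36).
    return prob * (1.0 / n_states) ** missing
```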
We compute such probabilities for all the remaining possible pathlet states. A random sampling from the computed probabilities then gives us the pathlet state with which to extend $S$, as described in section 6.4.
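Putting the two sketches together, one extension step could then look as follows; `build_tree`, `enumerate_pathlet_states` and `generate_initial_sequence` are hypothetical stand-ins for the construction steps that precede this one:

```python
root = build_tree(training_trajectories)      # hypothetical: the generated tree
all_states = enumerate_pathlet_states()       # hypothetical: the N pathlet states
lookup = lambda h, c: sequence_probability(root, h, c, len(all_states))

sequence = generate_initial_sequence()        # the already generated sequence S
history = sequence[-2:]                       # h = (s_{n-1}, s_n)
sequence.append(sample_next_state(history, all_states, lookup))
```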