On-line Pattern Associator

A Pattern Associator is a neural network with a single layer of neurons. Since only one output was needed, this architecture was used with a single neuron. The architecture used is shown in figure 2.10:

Figure 2.10: Pattern Associator. Each input $i_k$ is weighted by the weight $w_k$. The activation function of the neuron is the identity function.
\begin{figure}\begin{center}
\epsfbox{pattasso.ps}
\end{center}
\end{figure}

The output of the Pattern Associator is computed using the formula:

\begin{displaymath}o=\sum_{k=1}^Nw_k\cdot i_k=\overrightarrow{w}\cdot \overrightarrow{\imath}\end{displaymath}
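As a minimal sketch (in Python, not part of the original text), this computation is simply a dot product:

\begin{verbatim}
def output(weights, inputs):
    """Pattern Associator output: identity activation
    applied to the weighted sum of the inputs."""
    return sum(w * i for w, i in zip(weights, inputs))
\end{verbatim}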

The aim is still to minimize the error between the expected value and the value computed for the given inputs, so the weights are updated with the following formula [20,19]:


\begin{displaymath}\overrightarrow{w}\leftarrow \overrightarrow{w}+\eta\left(t-o\right)\overrightarrow{\imath}\end{displaymath}

where $o$ is the current output of the Pattern Associator, $t$ the corresponding expected value, $\overrightarrow{w}$ the weights at the current iteration, $\overrightarrow{\imath}$ the current inputs and $\eta$ a factor called the learning rate, which may vary from one iteration to the next. For the same reasons as with the multi-layer Perceptron, the following formula was used:


\begin{displaymath}\eta=\frac{1}{\left\Vert(t-o)\overrightarrow{\imath}\right\Vert}\end{displaymath}
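A sketch of one learning step combining the update rule with this learning rate (Python; the function name and the zero-error guard are my additions, since the formula above is undefined when $t=o$):

\begin{verbatim}
import math

def train_step(weights, inputs, target):
    """One on-line update of the Pattern Associator (delta rule)."""
    error = target - output(weights, inputs)
    # ||(t - o) * i|| = |t - o| * ||i||
    norm = abs(error) * math.sqrt(sum(i * i for i in inputs))
    if norm == 0.0:   # no error (or all-zero input): nothing to learn
        return weights
    eta = 1.0 / norm  # eta = 1 / ||(t - o) * i||
    return [w + eta * error * i for w, i in zip(weights, inputs)]
\end{verbatim}

With this choice of $\eta$, every applied correction $\eta(t-o)\overrightarrow{\imath}$ has unit norm.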

If the training examples are linearly separable, the Pattern Associator converges [19]. Another advantage of the Pattern Associator is its low memory consumption: each learning step, that is each application of the update formula, only requires the current inputs $\overrightarrow{\imath}$ and the current target $t$, so it is not necessary to store the former training examples. This avoids the memory problems encountered with the multi-layer Perceptron. The update formula is also simple and requires little computing power. Both points are essential in embedded systems, so the Pattern Associator is well suited for robotics, provided that the training examples are linearly separable.
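To illustrate the constant-memory property, a training loop only needs to hold the weight vector and the current example (a usage sketch reusing the hypothetical \texttt{train\_step} above):

\begin{verbatim}
def train_online(examples, n_inputs):
    """Train from an iterable of (inputs, target) pairs.
    Memory use is constant: only the weights and the current
    example are kept, never the whole training set."""
    weights = [0.0] * n_inputs
    for inputs, target in examples:
        weights = train_step(weights, inputs, target)
    return weights
\end{verbatim}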

