


Vision tools for egomotion prediction

Egomotion estimation relies on a notion of difference between images: a travelled distance cannot be computed from a single picture, so two pictures are needed.

Egomotion can, in some circumstances, be computed from optic flow, which is detailed in a number of articles and books [27,10,23,2]. For this thesis, however, we only want a neural network to approximate the required mapping, so optic flow does not need to be computed explicitly. The only preprocessing steps used were edge detection and histograms.
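As an illustration, the edge-detection step could be performed with a small gradient filter. The thesis does not specify which detector was used, so the Sobel operator below is only an assumed stand-in:

```python
import numpy as np

def edge_magnitude(img):
    """Approximate edge strength with Sobel convolutions.

    Returns the gradient magnitude over the valid (interior) region,
    so the output is 2 pixels smaller than the input in each dimension.
    Note: the actual edge detector used in the thesis is not stated;
    Sobel is an illustrative assumption.
    """
    kx = np.array([[-1.0, 0.0, 1.0],
                   [-2.0, 0.0, 2.0],
                   [-1.0, 0.0, 1.0]])   # horizontal gradient kernel
    ky = kx.T                           # vertical gradient kernel
    h, w = img.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    # Accumulate the 3x3 convolution as nine shifted, weighted copies.
    for i in range(3):
        for j in range(3):
            patch = img[i:i + h - 2, j:j + w - 2]
            gx += kx[i, j] * patch
            gy += ky[i, j] * patch
    return np.hypot(gx, gy)             # gradient magnitude
```

On a synthetic image with a single vertical step, this responds only near the step and stays zero in the flat regions, which is the behaviour the histogram stage depends on.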

To compute the egomotion, I used histograms of the two images in both the vertical and horizontal directions. Both kinds of histogram are needed here because almost all the points of the original image move in both directions, so each provides information. For each image, I concatenated the vertical and horizontal histograms, then subtracted the two resulting histograms to produce the set of inputs for the neural network.
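A minimal sketch of this preprocessing, assuming the histograms are simple row and column sums of the edge image (the exact binning used in the thesis is not specified):

```python
import numpy as np

def histogram_signature(edges):
    """Concatenate the horizontal and vertical histograms of an edge image."""
    col_hist = edges.sum(axis=0)   # one value per column (horizontal direction)
    row_hist = edges.sum(axis=1)   # one value per row (vertical direction)
    return np.concatenate([col_hist, row_hist])

def egomotion_inputs(edges_a, edges_b):
    """Difference of the two signatures: the input vector for the network."""
    return histogram_signature(edges_a) - histogram_signature(edges_b)
```

For a W x H edge image, each signature has only W + H components, and the subtraction treats both images symmetrically, as the next paragraph argues.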

The last step is justified by the fact that the neural network takes a linear combination of its inputs, and there is no reason to give one image greater importance than the other. Furthermore, this subtraction, combined with the previous operations, greatly reduces the number of inputs fed to the neural network: for a W x H image, the raw pixels number W times H, while the two histograms together contain only W + H values. This is worthwhile because it reduces both the complexity of the network and the cost of its calculations. Saving computation time is critical in our application, which runs on a 486 PC.

The histograms do not seem to depend linearly on the distance. Indeed, an object can generate different changes in the image for the same travelled distance, depending on its position in the scene: an object close to the camera generates large changes, while a distant object leaves the histograms almost unchanged. A simple on-line Pattern Associator is therefore not sufficient to learn the relationship between histograms and distances, so I used a multi-layer Perceptron instead of a simple Perceptron.
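To illustrate why a hidden layer helps, here is a minimal multi-layer Perceptron with one sigmoid hidden layer, trained by batch gradient descent on a toy nonlinear target standing in for the histogram-to-distance mapping. The architecture, target function, and learning rate are illustrative assumptions, not the values used in the thesis:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: a nonlinear function of the inputs, which a single linear
# layer (a Pattern Associator) cannot fit exactly.
X = rng.uniform(-1, 1, size=(200, 4))
y = np.sin(X.sum(axis=1, keepdims=True))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One hidden layer of 8 sigmoid units, linear output unit.
W1 = rng.normal(scale=0.5, size=(4, 8)); b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 1)); b2 = np.zeros(1)

lr = 0.1
for _ in range(2000):
    h = sigmoid(X @ W1 + b1)                 # hidden activations
    out = h @ W2 + b2                        # linear output
    err = out - y                            # prediction error
    # Backpropagation: gradients of the mean squared error.
    gW2 = h.T @ err / len(X); gb2 = err.mean(axis=0)
    dh = (err @ W2.T) * h * (1 - h)          # error propagated to hidden layer
    gW1 = X.T @ dh / len(X); gb1 = dh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

# Compare the trained network against always predicting the mean.
baseline = float(((y - y.mean()) ** 2).mean())
pred = sigmoid(X @ W1 + b1) @ W2 + b2
mse = float(((pred - y) ** 2).mean())
```

After training, the network's mean squared error falls below the constant-predictor baseline, which a purely linear associator could not guarantee for a nonlinear target.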



franck 2006-10-15