The results of this experiment were poor. The robot simply drove toward the walls instead of following the direction of the corridor. Even when a multi-layer perceptron was used to learn the mapping between processed images and steering angles, nothing improved: the network could learn the examples from the training set but was unable to generalize, and the error computed on a test set kept increasing as the robot was trained.
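The failure mode described above, a network that fits its training examples while the test error grows, can be reproduced with a small sketch. The code below is a hypothetical illustration, not the original system: it trains a single-hidden-layer perceptron with plain gradient descent to map a few image-derived features (stand-ins for the Hough-transform output) to a steering angle, where the training labels carry the kind of noise that inconsistent demonstrations would introduce.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 4 features per image (a stand-in for quantities
# extracted from the Hough transform) mapped to one steering angle.
n_in, n_hidden = 4, 8
W1 = rng.normal(0, 0.5, (n_in, n_hidden))
b1 = np.zeros(n_hidden)
W2 = rng.normal(0, 0.5, (n_hidden, 1))
b2 = np.zeros(1)

def forward(X):
    """Single hidden layer with tanh units, linear output."""
    h = np.tanh(X @ W1 + b1)
    return h @ W2 + b2, h

def train_step(X, y, lr=0.05):
    """One gradient-descent step on the mean squared error."""
    global W1, b1, W2, b2
    yhat, h = forward(X)
    err = yhat - y
    gW2 = h.T @ err / len(X)
    gb2 = err.mean(0)
    dh = (err @ W2.T) * (1 - h ** 2)   # backprop through tanh
    gW1 = X.T @ dh / len(X)
    gb1 = dh.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1
    return float((err ** 2).mean())

# Toy data: an underlying clean mapping plus label noise, mimicking
# the inconsistent joystick demonstrations discussed in the text.
X_train = rng.normal(size=(40, n_in))
y_train = X_train[:, :1] * 0.3 + rng.normal(0, 0.4, (40, 1))
X_test = rng.normal(size=(40, n_in))
y_test = X_test[:, :1] * 0.3

losses = [train_step(X_train, y_train) for _ in range(300)]
test_mse = float(((forward(X_test)[0] - y_test) ** 2).mean())
print(f"train MSE: {losses[-1]:.3f}, test MSE: {test_mse:.3f}")
```

The training error falls steadily while the test error stays dominated by the label noise the network has memorized, which is the symptom reported in the experiment.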
The problem with this experiment lies in the way the robot was taught. Teaching with the joystick introduces inconsistencies into the training set, because it is very hard for a human to know exactly the right steering speed to apply. If the experimenter turns the robot too slowly, the robot tends not to turn at all in the testing stage. If the experimenter turns it too quickly, the robot can rotate sharply between two frames and face the wrong direction in the next one. Unfortunately, computing the Hough transform on a 486 PC is so slow that we were always in the second case, even when we turned the robot slowly.
So the way of training the robot had to be changed. The geometry of the robot does not allow a human to estimate correctly the steering needed to follow a wall: the camera is placed near the ground and looks upwards, which is not the usual way a human follows a corridor. We therefore opted for automatic learning, that is, a program that taught the robot how to follow a corridor; the robot should then be able to generalize what it has learned. This method has advantages and drawbacks. The main advantage is that the robot learns without human intervention, so the learning stage produces a consistent training set. Another advantage is that the robot can use landmarks other than those the designer relied on. The drawback is that the programmer has to find a way of achieving automatic learning, which means he has to program a corridor follower himself.
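The automatic teacher described above can be sketched as follows. This is a hypothetical illustration under assumed interfaces, not the original program: a hand-coded proportional controller steers against a measured wall angle (standing in for the output of the Hough-transform line detector) and, while driving, records consistent (observation, steering) pairs that the network can later be trained on.

```python
# Hypothetical automatic teacher: a hand-coded corridor follower
# generates a consistent training set without human intervention.
# The "wall angle" input is an assumed stand-in for the line
# orientation extracted by the Hough transform.

def teacher_steering(wall_angle, gain=0.5):
    """Proportional controller: steer against the measured wall angle
    so the robot realigns with the corridor direction."""
    return -gain * wall_angle

def collect_training_set(wall_angles):
    """Drive with the teacher over a sequence of observations and
    record (observation, steering command) pairs for later training."""
    return [(a, teacher_steering(a)) for a in wall_angles]

if __name__ == "__main__":
    # Simulated wall angles (radians) as the robot drifts off course.
    observations = [0.0, 0.1, 0.2, 0.15, 0.05, -0.1]
    for angle, steer in collect_training_set(observations):
        print(f"angle={angle:+.2f}  steering={steer:+.2f}")
```

Because every label comes from the same deterministic rule, the resulting set is free of the timing inconsistencies that joystick teaching introduced.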