



Hough transform

If the camera does not point in a fixed direction, the left wall is not always on the left-hand side of the image and the right wall is not always on the right-hand side. In this case, computing a histogram of the image is therefore not a solution.

A feature that can be computed is the angle between each of the lines in the edges image and a fixed direction. This idea has already been implemented in industrial robots such as the plant-scale husbandry robot described in [31]. I have taken the vertical direction as the reference direction. It is worth noting that this feature is relevant only if the camera is pointing upwards or downwards, or if only the upper or the lower part of the image is used. Indeed, when the camera looks forwards, a wall produces lines whose angles with the reference direction have different signs, depending on whether the lines are in the upper or the lower part of the image. However, if the camera is looking upwards, one wall generates lines whose angles all have the same sign, and the other wall generates lines whose angles have the opposite sign. Indeed, all the strong lines seem to intersect at the end of the corridor. Figure 2.4 summarizes this.

Figure 2.4: Edge-detected images of a corridor. Figure 2.4(a) shows an edges image of a corridor when the camera is looking forwards and figure 2.4(b) shows an edges image of the same kind of corridor when the camera is looking upwards.
[Figure 2.4: (a) camera looking forward; (b) camera looking upward.]

The problem is to calculate the angle of each line in the edges image. A common way to do this is to use a Hough transform, which is described in [27]. The idea is to use an accumulator array that computes a strength for each line through a voting mechanism. This accumulator array is called the Hough space. The dimension of the Hough space is the number of geometric parameters needed to describe the pattern we want to recognize, such as its position or scale; these parameters must be sufficient to reconstruct the original pattern. Here the pattern is a line, so the dimension of the Hough space is two.

I chose the two parameters as follows: the first is the distance between the centre of the image and the line, and the second is the angle between the line and the vertical. Each line in the edge-detected image therefore corresponds to a unique point in the Hough space.

In order to find the lines that are correctly detected in an edge-detected image, each point that belongs to a line votes for the corresponding parameters. A strong value in the Hough space therefore corresponds to a line that is well detected by the edge detector, that is, a solid line in the edge-detected image. We can then select the best lines by selecting the strongest values in the Hough space. This procedure is reliable and widely used in machine vision. Figure 2.5 shows an example of line detection using a Hough transform.
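As an illustration of this parametrisation, the following Python sketch maps a line (given by two of its points) to the (angle, distance) pair that labels its point in the Hough space. The function name and the example values are mine, not part of the original implementation.

import math

def line_to_hough_point(p, q, centre):
    # Map the line through the image points p and q to the pair
    # (angle to the vertical, distance to the image centre) that
    # labels its point in the Hough space.
    (xa, ya), (xb, yb) = p, q
    cx, cy = centre
    dx, dy = xb - xa, yb - ya
    # Signed angle between the line direction and the vertical axis,
    # wrapped into (-90, 90] degrees so that a line and its reverse agree.
    theta = math.degrees(math.atan2(dx, dy))
    if theta <= -90.0:
        theta += 180.0
    elif theta > 90.0:
        theta -= 180.0
    # Perpendicular distance from the image centre to the line.
    dist = abs((cx - xa) * dy - (cy - ya) * dx) / math.hypot(dx, dy)
    return theta, dist

# A vertical line 30 pixels to the right of the centre of a 200 x 200 image:
print(line_to_hough_point((130, 0), (130, 199), (100, 100)))  # (0.0, 30.0)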

Figure 2.5: Example of the Hough space of an image. The figure shows the result of an edge detection and its corresponding Hough space. An example of the effect of a line and a group of lines on the Hough space is shown. The thresholded Hough space also shows that these lines are strongly detected by the edge detector. d_max is half the length of a diagonal of the image.
[Figure 2.5]

In order to compute the Hough space, we have to vary one of the two parameters for each point in the original image. Since the point under consideration lies on the line, if we know the distance from the line to the centre we can find the orientation of the line; conversely, if we know the orientation of the line we can find the distance between the line and the centre of the image. The second calculation is simpler, and therefore more efficient on a computer.

Figure 2.6 shows the situation when we compute the distance between a line and the centre of the image. $(x_0,y_0)$ denotes the point we are currently considering, and $(x_1,y_1)$ denotes the point located at the centre of the image. $D$ denotes the distance between the line and $(x_1,y_1)$, $\overrightarrow{V}$ is the unit normal to the line, and $\overrightarrow{W}$ is the vector from $(x_0,y_0)$ to $(x_1,y_1)$. Finally, $\theta$ denotes the angle between the vertical and the line. The aim is to compute $D$ from the other parameters.

Let $\overrightarrow{P}$ denote the projection of $\overrightarrow{W}$ onto $\overrightarrow{V}$. We have:

\begin{displaymath}D=\left\Vert\overrightarrow{P}\right\Vert=\frac{\left\vert\overrightarrow{V}\cdot\overrightarrow{W}\right\vert}{\left\Vert\overrightarrow{V}\right\Vert}\end{displaymath}

Furthermore,

\begin{displaymath}\overrightarrow{V}=\left(
\begin{array}{c}
\cos \theta \\
\sin \theta \\
\end{array}
\right)\end{displaymath}

and

\begin{displaymath}\overrightarrow{W}=\left(
\begin{array}{c}
x_1-x_0 \\
y_1-y_0 \\
\end{array}\right)\end{displaymath}

So

\begin{displaymath}D=\left\vert\cos(\theta)\left(x_1-x_0\right)+\sin(\theta)\left(y_1-y_0\right)\right\vert\end{displaymath}

Figure 2.6: Calculation of the distance for the Hough transform. $(x_1,y_1)$ denotes the point located in the middle of the image and $(x_0,y_0)$ denotes the point we are currently using in the algorithm.
[Figure 2.6]

So the distance $D$ can easily be computed once the angle $\theta$ is known. I therefore used the angle $\theta$ as the varying parameter to compute the Hough space.
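As a quick check of this formula, the small Python sketch below evaluates it for an edge point and a candidate angle. The function name and the numerical values are illustrative only.

import math

def distance_to_centre(x0, y0, theta_deg, x1, y1):
    # D = |cos(theta)(x1 - x0) + sin(theta)(y1 - y0)|: the distance between
    # the image centre (x1, y1) and the line that passes through the edge
    # point (x0, y0) at angle theta to the vertical.
    t = math.radians(theta_deg)
    return abs(math.cos(t) * (x1 - x0) + math.sin(t) * (y1 - y0))

# Edge point 30 pixels to the right of the centre, vertical line (theta = 0):
print(distance_to_centre(130, 100, 0.0, 100, 100))   # 30.0
# Same point, line at 45 degrees to the vertical:
print(distance_to_centre(130, 100, 45.0, 100, 100))  # about 21.2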

The following algorithm is then used to compute the Hough space (in this algorithm the Hough space is represented by a two-dimensional array):

compute the edge-detected image using an edge detector.

set all the elements of the array to 0.

for each point in the edge-detected image that is detected 
as part of an edge, do:
     for each angle between -90 degrees and 90 degrees, do:
          * compute the distance between the line which goes 
            through the point and is directed by the angle, 
            and the centre of the image.
          * in the array, increment the value of the element 
            labelled by the angle and the computed distance.
     done.
done.

for each element of the array do:
     * if the value of this element is greater than a fixed 
       threshold, set the element to 1.
     * otherwise, set the element to 0.
done.

return the array.
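
For concreteness, the algorithm above can be written as the following NumPy sketch. It follows the pseudocode step by step; the numbers of angle and distance bins, the threshold and the function name are illustrative choices rather than values used in the actual implementation.

import numpy as np

def hough_space(edges, n_angles=180, n_dists=200, threshold=50):
    # edges: binary image produced by an edge detector (True on edge points).
    # Rows of the accumulator are angles in [-90, 90) degrees, columns are
    # distances in [0, d_max], where d_max is half the image diagonal.
    h, w = edges.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0              # image centre (x1, y1)
    d_max = 0.5 * np.hypot(h, w)
    thetas = np.deg2rad(np.linspace(-90.0, 90.0, n_angles, endpoint=False))
    acc = np.zeros((n_angles, n_dists), dtype=np.int32)

    ys, xs = np.nonzero(edges)                          # edge points (x0, y0)
    for x0, y0 in zip(xs, ys):
        # For every candidate angle, distance between the centre and the
        # line through (x0, y0):  D = |cos(t)(x1-x0) + sin(t)(y1-y0)|
        d = np.abs(np.cos(thetas) * (cx - x0) + np.sin(thetas) * (cy - y0))
        bins = np.minimum((d / d_max * (n_dists - 1)).astype(int), n_dists - 1)
        acc[np.arange(n_angles), bins] += 1             # one vote per angle

    return (acc > threshold).astype(np.uint8)           # thresholded Hough space

# Synthetic example: a single vertical line 30 pixels right of the centre.
edges = np.zeros((200, 200), dtype=bool)
edges[:, 130] = True
print(hough_space(edges).nonzero())   # surviving cells correspond to theta = 0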

This procedure returns an array that is labelled by angles and distances. We can then use the angle information by computing a histogram of the Hough space, in the same manner as we computed histograms of images, and the distance information by computing the corresponding histogram. Nevertheless, the distances seem to be less important than the angles, so I used wide stripes to compute the histogram for distances and narrow stripes to compute the histogram for angles. I then concatenate these two histograms and use the result as input to a neural network.
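A possible reading of this step is sketched below, with made-up stripe widths; it assumes the chosen bin counts divide the dimensions of the thresholded Hough space exactly (as they do for the sketch above).

import numpy as np

def hough_feature_vector(thresholded, n_angle_bins=36, n_dist_bins=8):
    # thresholded: 0/1 array returned by the algorithm above, with one row
    # per angle and one column per distance.
    angle_counts = thresholded.sum(axis=1)              # votes per angle
    dist_counts = thresholded.sum(axis=0)               # votes per distance
    # Narrow stripes for the angles, wide stripes for the distances.
    angle_hist = angle_counts.reshape(n_angle_bins, -1).sum(axis=1)
    dist_hist = dist_counts.reshape(n_dist_bins, -1).sum(axis=1)
    # Concatenated histograms used as the input of the neural network.
    return np.concatenate([angle_hist, dist_hist])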

