LEGO Vehicle Navigating Through Roads and Traffic Signs to a Given Destination

The objective of this project was to create an automated vehicle that navigates according to both a route and traffic signs.

Another objective was to add a “regulated navigation” mode, in addition to the random driving state, in which the vehicle is required to navigate towards one of two destinations set in advance.
The vehicle’s central processing system is an iPAQ pocket PC, which is located on the vehicle, along with a camera, a transmitter and a motor controller.


The problem
Our main motivation was to simulate a real driving environment as closely as possible: an environment including curved roads with turns and junctions, traffic signs and, of course, a destination.
To achieve these goals we needed to identify the path to be followed and navigate accordingly, decipher the traffic signs along the way, and find the right route for the given destination. All this must be done in a relatively short processing time.



Figure 1 The vehicle
Figure 2 The ring


Figure 3 The traffic signs


The solution
As mentioned, there are two destinations. The first one, ‘Home’, has its own set of traffic signs, all of them purple.


Figure 4 Destination 1 traffic signs

When the destination given to the vehicle is ‘Home’, it will obey the above signs, as well as the general traffic signs.
The triangular sign indicates the destination itself; when the vehicle reaches it, it stops and the program ends.
The second destination is ‘University’, and the matching traffic signs are green:

Figure 5 Destination 2 traffic signs


The destination is given by the user at the start of the program using the interface shown in figure 6.


Figure 6 User Interface


The Algorithm

Figure 7 Algorithm Diagram


As described in the flow chart above, each frame passes through a few steps that determine the left and right edges of the path, and a few other steps that detect the sign in the frame (if there is one).
When the sign is deciphered we know which edge of the road to follow; for example, if the traffic sign is ‘turn left’, we follow the left edge of the road. To determine the left and right edges, the system analyses the edge image and distinguishes between the left path, the right path, far paths (which are ignored) and noise.
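The categorization step above can be sketched as follows. The report only states that edge segments are sorted into left path, right path, far paths and noise, so the concrete criteria here (segment length for noise, vertical position for far paths, horizontal position for left/right) and all thresholds are illustrative assumptions, not the project's real values.

```c
/* Hypothetical edge-segment classifier; thresholds are assumptions. */
typedef enum { EDGE_NOISE, EDGE_LEFT, EDGE_RIGHT, EDGE_FAR } EdgeClass;

EdgeClass classify_edge(int mean_x, int mean_y, int length,
                        int frame_width, int frame_height)
{
    if (length < 10)                 /* too short: treat as noise     */
        return EDGE_NOISE;
    if (mean_y < frame_height / 3)   /* high in the frame: a far path */
        return EDGE_FAR;
    return (mean_x < frame_width / 2) ? EDGE_LEFT : EDGE_RIGHT;
}
```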


Figure 8 Before and after categorizing edges


After finding the edges we need to decide at what angle the motors should be set. The decision is made by calculating the average distance of the right/left path from the center of the frame and comparing it to a certain threshold.

Figure 9 Calculating distance of left/right edge from center
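The threshold comparison described above can be written as a small decision rule. The target distance, tolerance and return values below are assumed for illustration; the report does not give the project's actual constants.

```c
/* Sketch of the steering rule: compare the average distance of the
 * followed edge from the frame centre to a threshold band.
 * TARGET_DIST and TOLERANCE are assumed values, not the real ones. */
typedef enum { STEER_LEFT, STEER_STRAIGHT, STEER_RIGHT } Steer;

#define TARGET_DIST 60   /* desired edge-to-centre distance, pixels */
#define TOLERANCE   10

/* follow_left != 0 means we follow the left edge (e.g. after 'turn left') */
Steer steering_decision(int avg_dist, int follow_left)
{
    if (avg_dist > TARGET_DIST + TOLERANCE)   /* edge drifting away  */
        return follow_left ? STEER_LEFT : STEER_RIGHT;
    if (avg_dist < TARGET_DIST - TOLERANCE)   /* edge too close      */
        return follow_left ? STEER_RIGHT : STEER_LEFT;
    return STEER_STRAIGHT;
}
```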


The edge (left or right) from which we wish to calculate the angle is determined by the traffic signs.
We try to decipher a sign only when there are enough pixels in the frame with the color of a sign (red, blue, and purple or green, depending on the given destination), and only if we have not already deciphered that sign.
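A minimal sketch of this trigger, assuming a simple per-pixel RGB test and a pixel-count threshold (both the color bounds and the threshold are illustrative, not the project's actual values):

```c
#include <stdint.h>

/* Assumed colour test: crude RGB bounds for red, blue and purple. */
static int is_sign_color(uint8_t r, uint8_t g, uint8_t b)
{
    if (r > 150 && g < 80 && b < 80) return 1;   /* red    */
    if (b > 150 && r < 80 && g < 80) return 1;   /* blue   */
    if (r > 120 && b > 120 && g < 80) return 1;  /* purple */
    return 0;
}

/* Attempt deciphering only past a pixel-count threshold, and only if
 * this sign has not been handled already. rgb is packed R,G,B triples. */
int should_decipher(const uint8_t *rgb, int n_pixels,
                    int min_count, int already_deciphered)
{
    int count = 0, i;
    if (already_deciphered)
        return 0;
    for (i = 0; i < n_pixels; i++)
        count += is_sign_color(rgb[3*i], rgb[3*i + 1], rgb[3*i + 2]);
    return count >= min_count;
}
```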

Deciphering a sign has two stages:
1) Finding the objects in the frame that are potentially a sign (that is, objects that have the color of a sign and the general shape of a sign, or part of it). This is done by a well-known algorithm called region growing.
2) When the objects in the frame are found we analyse their shape to see if they match one of the signs in our database.
When the type of the sign is detected we can determine the action of the wheels and go on to analyse the next frame.
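Stage 1 above can be sketched with a minimal region-growing routine: starting from a seed pixel of a sign color, grow a 4-connected region over matching pixels. This is a generic illustration of the technique, not the project's implementation; the fixed frame size and the precomputed binary color mask are assumptions, and a real version would then pass each region's shape to the sign matcher of stage 2.

```c
#define W 8
#define H 8

/* mask: 1 where the pixel has a sign colour; visited marks grown pixels.
 * Returns the region size. An explicit stack avoids deep recursion. */
int grow_region(unsigned char mask[H][W],
                unsigned char visited[H][W], int sy, int sx)
{
    int stack[W * H][2], top = 0, size = 0;
    if (!mask[sy][sx] || visited[sy][sx]) return 0;
    stack[top][0] = sy; stack[top][1] = sx; top++;
    visited[sy][sx] = 1;
    while (top > 0) {
        int y = stack[--top][0], x = stack[top][1];
        int d, dy[4] = {-1, 1, 0, 0}, dx[4] = {0, 0, -1, 1};
        size++;
        for (d = 0; d < 4; d++) {        /* visit 4-connected neighbours */
            int ny = y + dy[d], nx = x + dx[d];
            if (ny >= 0 && ny < H && nx >= 0 && nx < W &&
                mask[ny][nx] && !visited[ny][nx]) {
                visited[ny][nx] = 1;
                stack[top][0] = ny; stack[top][1] = nx; top++;
            }
        }
    }
    return size;
}
```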
For a more detailed description of the project and the algorithm behind it please see the attached project book.




The units on-board the vehicle:

A Watec WAT-270 video camera (1) receives the image of the surface, and through the FlyJacket i3800 device (2) the image is transferred to the Compaq iPAQ Pocket PC (3) for image processing. The results (navigation instructions) are forwarded by a serial cable (4) to the IR TOWER (5), which transmits the instructions on to the RCX 2.0 controller (6). The RCX operates the vehicle’s wheels (7). Two batteries (8) are on-board the vehicle, supplying the needed voltage to the Tower (9v), to the RCX (7.2v) and to the camera (7.2v). More technical details regarding the hardware units and the control loop are in the attached Project Book.
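The hardware chain above implies a simple per-frame control loop: grab a frame, process it, and forward the resulting command through the serial port toward the IR tower. The sketch below shows that structure only; every function in it is a hypothetical stub standing in for the real FlyJacket capture and Windows CE serial calls.

```c
#include <stdio.h>

typedef enum { CMD_LEFT, CMD_STRAIGHT, CMD_RIGHT, CMD_STOP } Command;

/* Hypothetical stubs for the real capture, processing and serial I/O. */
static int grab_frame(unsigned char *buf)              { (void)buf; return 1; }
static Command process_frame(const unsigned char *buf) { (void)buf; return CMD_STRAIGHT; }
static void send_to_tower(Command c)                   { printf("cmd=%d\n", c); }

/* Run the loop for at most max_frames frames, or until the destination
 * is reached (CMD_STOP); returns the number of frames processed. */
int control_loop(int max_frames)
{
    unsigned char frame[160 * 120 * 3];   /* assumed frame size */
    int n = 0;
    Command c = CMD_STRAIGHT;
    while (n < max_frames && c != CMD_STOP) {
        if (!grab_frame(frame))
            break;
        c = process_frame(frame);
        send_to_tower(c);
        n++;
    }
    return n;
}
```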


Results and Conclusions
All of our goals in this project have been reached. Our vehicle navigates its way elegantly, analysing and obeying all the signs along the road without losing its path. We have developed and used simple and efficient algorithms, which manage to execute all the missions “on-line”, even though the processing unit is much slower than a regular PC. Our algorithm handles a “noisy” surface and a colored background. However, very strong sunlight can severely disturb the recognition of color, and handling T-junctions or 4-way junctions would require a different algorithm.

Future developments of this project could include learning the way to a destination (back-tracking), memorizing the way to a destination after traveling it once, or adding more traffic signs (U-turn, speed limit, dead-end, etc.).


Our project was implemented on the Microsoft eMbedded Visual C++ (version 3.0) platform. Our chosen algorithms were first tested in Matlab 6.5, then translated to C code and placed in the proper spot in the program. A USB cable connection between the iPAQ and the PC enabled building an executable file of the program in the iPAQ’s memory. This connection also enabled debugging the program while it was running on the iPAQ.




We are grateful to our project supervisor, Johanan Erez, for his help and guidance throughout this work, and to the lab staff, Ina Krinsky and Aharon Yacobi, for their technical support and warm treatment. We are also grateful to the Ollendorf Minerva Center Fund for supporting this project.