Navigating a Lego Vehicle Inside the Road Lanes

Abstract
This project is one of the first projects in THE VISION AND IMAGE SCIENCES LABORATORY to implement computer vision techniques on the LEGO MINDSTORMS platform. The goal is to “teach” a Lego car to drive independently, as a human being does. The main goal of the project is to build an autonomous Lego vehicle able to navigate inside the road lanes. A secondary goal is to program the vehicle to respond to simple traffic signals, such as traffic lights.

The background
Technology is accelerating, and what was science fiction in the past is becoming reality today. It is the right time to start building robots able to replace humans in tasks such as cleaning the house, shopping, and driving. In some tasks a robot is more accurate and productive than a human, so a robot may accomplish such work better and faster than we do; this motivates teaching robots to perform the jobs mentioned above. This project is one of a series of projects that try to “teach” a Lego car to drive independently. Driving independently as a human being does involves several skills: staying inside the road lanes, bypassing obstacles, navigating according to traffic signs, and so on. In our project we concentrate on building an autonomous vehicle able to drive independently inside the road lanes.

The solution
We humans drive using our vision: we process the image we see and activate muscles to turn the wheel, push the pedals, and change gears. We implemented the same model. We built a Lego vehicle with a camera installed on it that captures images; the images are transmitted from the vehicle to the computer over radio. The computer processes each image with our program and computes the direction the vehicle should follow. The output of the program is the speed of each of the two engines connected to the vehicle’s wheels, analogous to activating muscles while driving.

Tools and Environment
1. RCX unit – a small programmable unit by LEGO, based on the Hitachi H8 microcontroller, providing serial I/O, A/D conversion, 16K ROM, 32K RAM, and timers. It receives commands from the computer over infrared light and sets each wheel’s speed (Figure 1).

2. Small camera – a small camera is installed on the vehicle to capture images from the driver’s point of view.

3. Video transmitter – communication between the camera and the computer is done via a radio transmitter on the vehicle and a receiver next to the computer.

4. Rechargeable 7.2V battery – all the devices installed on the vehicle are powered by the rechargeable 7.2V battery (Figure 4).

5. PC – the brains of the vehicle (our program) run on a personal computer with a video capture card. The program is written in Visual C++ 6 with MFC.

6. Video receiver – the receiving end of the radio video link, placed next to the computer.

7. Infrared USB tower – the computer controls the vehicle’s direction and speed by sending commands to the RCX controller through the infrared USB tower.

8. The road – to simplify the problem, we assumed that the road is gray (like the laboratory floor) and the lanes are blue; when there are enough red pixels in the image it means a red light, and enough green pixels mean a green light.

Figures 1-4: photographs of the hardware components listed above.

The algorithm
The main flow of the algorithm is shown in the flowchart of Figure 5.

First, we capture an image from the camera installed on the vehicle. The image is transmitted from the video transmitter to the receiver and into the computer. Our program then performs the image processing described in the next section.
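As an illustration of this loop, here is a minimal C++ sketch. The original program was written in Visual C++ 6 with MFC and used the VideoOcx and Phantom classes; the types and functions below are hypothetical stand-ins, not the project’s actual code.

    // Hypothetical main control loop, following the flowchart of Figure 5.
    struct Frame { /* raw image pixels from the video receiver */ };
    struct EngineSpeeds { int left; int right; };

    Frame CaptureFrame() { return Frame(); }          // stub: real code used VideoOcx
    EngineSpeeds ProcessImage(const Frame&) {         // stub: lane + light analysis
        return EngineSpeeds{4, 4};                    // drive straight by default
    }
    void SendEngineSpeeds(const EngineSpeeds&) {}     // stub: real code used the
                                                      // Phantom class and IR tower
    int main() {
        for (;;) {
            Frame frame = CaptureFrame();             // 1. capture from the camera
            EngineSpeeds speeds = ProcessImage(frame);// 2. compute wheel speeds
            SendEngineSpeeds(speeds);                 // 3. command the RCX
        }
    }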

Image Processing
To simplify the image processing, we made the following assumptions: the road is gray and the lanes are blue; additionally, the only red or green pixels visible from the vehicle belong to traffic lights, and there are no other objects in the scene. The stages of the image processing are listed below (a code sketch follows the list).
1) The original RGB image is shown in Figure 6.

2) The RGB image is converted to HSV. The blue part of the image corresponds to the road lanes (shown as a black-and-white mask in Figure 7).

3) The black-and-white blue mask goes through a median filter (shown in Figure 8).

4) Next comes edge detection (the edges are shown in Figure 9).

5) After edge detection we apply the Hough transform and choose the best lines to represent the lanes (the chosen lines are shown in Figure 10).

6) The red and green pixels are counted in the HSV image in order to identify red and green lights.
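To make stages 2-6 concrete, here is a minimal C++ sketch. The original implementation was written in Visual C++ 6; the OpenCV calls and all HSV threshold values below are assumptions for illustration only, not the project’s actual code or parameters.

    #include <opencv2/opencv.hpp>
    #include <vector>

    // Illustrative lane and traffic-light processing (stages 2-6).
    void DetectLanesAndLights(const cv::Mat& bgr,
                              std::vector<cv::Vec2f>& laneLines,
                              int& redCount, int& greenCount) {
        cv::Mat hsv;
        cv::cvtColor(bgr, hsv, cv::COLOR_BGR2HSV);         // stage 2: RGB -> HSV

        cv::Mat blueMask;                                  // blue pixels = lanes
        cv::inRange(hsv, cv::Scalar(100, 80, 80),
                    cv::Scalar(130, 255, 255), blueMask);  // B/W mask (Figure 7)

        cv::medianBlur(blueMask, blueMask, 5);             // stage 3: median filter

        cv::Mat edges;
        cv::Canny(blueMask, edges, 50, 150);               // stage 4: edge detection

        cv::HoughLines(edges, laneLines, 1, CV_PI / 180,   // stage 5: Hough transform;
                       60);                                // keep the strongest lines

        cv::Mat redMask, greenMask;                        // stage 6: count red/green
        cv::inRange(hsv, cv::Scalar(0, 120, 120),          // (red hue also wraps near
                    cv::Scalar(10, 255, 255), redMask);    //  180; ignored for brevity)
        cv::inRange(hsv, cv::Scalar(50, 120, 120),
                    cv::Scalar(70, 255, 255), greenMask);
        redCount = cv::countNonZero(redMask);
        greenCount = cv::countNonZero(greenMask);
    }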

Figure 6: the original RGB image.

Figure 7: the blue (lane) pixels shown as a black-and-white mask.

Figure 8: the mask after median filtering.

Figure 9: the detected edges.

Figure 10: the lane lines chosen by the Hough transform.

Car control algorithm
From the counts of red and green pixels we determine whether a red or a green light is on. If there are enough red pixels, the light is red: we stop the engines and capture the next image. If there was a red light and now there are enough green pixels, we calculate the direction of the car and restart the engines, as sketched below.
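A minimal sketch of this stop/go logic in C++; the pixel-count thresholds and the engine helper functions are illustrative assumptions, not the project’s actual values.

    // Hypothetical red/green light handling based on the pixel counts.
    const int RED_THRESHOLD   = 500;    // what counts as "enough" red pixels
    const int GREEN_THRESHOLD = 500;    // what counts as "enough" green pixels
    bool stoppedAtRed = false;

    void StopEngines()  {}  // stub: would set both engine speeds to zero
    void StartEngines() {}  // stub: would resume with the computed direction

    void HandleTrafficLight(int redCount, int greenCount) {
        if (redCount > RED_THRESHOLD) {
            stoppedAtRed = true;        // red light: halt and keep watching
            StopEngines();
        } else if (stoppedAtRed && greenCount > GREEN_THRESHOLD) {
            stoppedAtRed = false;       // green after red: drive again
            StartEngines();
        }
    }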

The needed driving direction is calculated by a neural network trained with the back-propagation method. The inputs to the network are the parameters of the lines produced by the image processing (if one of the lines is missing, its radius parameter is set to the sentinel value -999). The network has 7 outputs, one per direction, from -3 (left) to 3 (right); we choose the direction with the maximal output. After calculating the direction, we update the engine speeds:

right_engine_speed = 4 - direction

left_engine_speed = 4 + direction

Then we send the engine speeds to the vehicle and capture the next image. For example, direction = 2 (turn right) gives right_engine_speed = 2 and left_engine_speed = 6, so the left wheel spins faster and the car turns right. A sketch of the direction and speed computation follows.
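A short C++ sketch of this computation; the network evaluation itself is abstracted away, and only the argmax over the 7 outputs and the speed formulas come from the description above.

    // Choose the direction with the maximal network output and update speeds.
    // netOutputs holds the 7 outputs for directions -3 (left) .. 3 (right).
    const double MISSING_LINE = -999;   // sentinel used when a lane line is
                                        // absent from the network's inputs
    void ComputeSpeeds(const double netOutputs[7],
                       int& leftSpeed, int& rightSpeed) {
        int best = 0;
        for (int i = 1; i < 7; ++i)
            if (netOutputs[i] > netOutputs[best]) best = i;
        int direction = best - 3;       // map index 0..6 to -3..3
        rightSpeed = 4 - direction;     // e.g. direction = 2 -> right = 2
        leftSpeed  = 4 + direction;     //                      left  = 6
    }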

Conclusions
The project works well under the following assumptions:

The lanes are continuous and blue.

The road is gray.

There is nothing else in the field of view except road lanes and traffic lights.

Some improvements that might become continuations of the project:

1) A “preprocessor” for our project, to overcome the assumptions above.

2) Improving the choice of direction from the detected lines using more sophisticated control algorithms (e.g., using the history of previous frames).

3) Since there are finished projects in this area, such as bypassing obstacles and navigating according to traffic signs, alongside our project on navigating inside road lanes, another possible project could bring them all together into one smart driver.

Acknowledgement
We would like to thank the “Ollendorff Minerva Center” Fund, which supported this project.

Thanks to Johanan Erez for supervising.

Thanks to Aharon Yacoby for the technical support.

Thanks to Dror Ouzana for helping us with the Microsoft Visual C++ classes supporting VideoOcx (sampling images from the camera) and the Phantom class for controlling the RCX (car controller) of the Lego kit.