Mapping Surface by Exploration Vehicle


Abstract
The goal of this project was to build a vehicle that navigates according to visual information, explores a given area, and draws a map of the obstacles met during the exploration. A small vehicle was assembled from LEGO bricks, with a miniature video camera attached to it. The exploration area was an enclosed surface with white walls and colorful objects placed between them. The mission of the “LEGO car” was to move along black lines drawn on the floor, explore the whole area, identify obstacles, and at the end draw a map of the explored area containing the locations of all obstacles met. The vehicle then had to explore the same area again, find all the obstacles recognized before, and draw a new map.

The basic approach
Tracking the surface, capturing and analyzing pictures:
The vehicle follows the black paths drawn on the floor and detects specific colorful stationary objects in the arena. When an object is detected, it is photographed and its coordinates are measured. After the vehicle has covered the whole surface, a map of the objects is built from these measurements. The surface can be traversed as many times as the user wants.
Control the movement of the robot:
The vehicle’s movement is commanded only by wireless remote-control devices.
Integrating the systems:
Creating one environment which integrates the use of hardware and software components of varying interfaces.

Project Environment
The Vehicle
Figure 1 – Project Environment

1. The vehicle, built from LEGO Mindstorms:
Figure 2 – The vehicle

2. RCX controller with firmware 2.0 and a downloaded program written in NQC; the program receives commands from the computer, controls the vehicle, and sends information from the sensors back to the PC:
Figure 3 – RCX controller

3. A miniature Watec WAT-207A video camera is attached to the top front of the vehicle:
Figure 4 – Video camera

4. Video transmitter that transmits images from the camera to the video receiver, which is connected to the PC.
Figure 5 – Video transmitter

5. A rotation sensor is attached to the wheels on each side; it counts the number of rotations the wheels have made.
A light sensor attached to the front of the car identifies the black line on the floor.
Figure 6 – Rotate sensor
Figure 7 – Light sensor

6. Batteries – two battery packs: one (7.2 V) supplies power to the RCX and the video camera; the other (10.8 V) sustains the transmitter.
Figure 8 – Batteries

The computer
1. PC (Computer) – most of the project is implemented and runs on a PC with two pairs of USB ports. It is programmed mainly in Visual Studio 6, using MFC and ActiveX classes.
Figure 9 – PC (Computer)

2. Infrared tower that broadcasts signals from the PC to the RCX, and vice versa.
3. Video receiver that receives pictures from the video transmitter and sends them to the PC.
Figure 10 – Infrared tower and video receiver

The Arena
The vehicle executes its assignment properly only if everything in the arena has the “right” color (the “right” colors are described below): the walls are white, the floor is gray, the path is black, the obstacles are of any color with high saturation, and the upper part of each obstacle is white.
Figure 11 – Arena

Software environment

  • Visual C++
  • NQC
  • MFC

Solution
Noise cleaning
1. Unexpected jumps in the brightness of the pictures
When we divide a picture by a threshold – making all pixels below the threshold one color (black) and all pixels above it another (white) – choosing that threshold is complicated, since we cannot know in advance which value to pick. After the division, the object should be found among the pixels below the threshold. The solution to this problem is a dynamic threshold. We give it an initial value, and after dividing the picture we check whether the objects we obtained have the expected sizes. If so, we can continue with the mission. If the object is too big, we lower the threshold and repeat the division; if it is too small, we raise it, and so on until the object reaches the needed size. This, of course, can be done only if we know in advance the estimated size of the searched object or form.
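The dynamic-threshold loop described above can be sketched as follows; the initial value, step size, and iteration limit here are illustrative assumptions, not values taken from the project:

```python
def dynamic_threshold(image, target_min, target_max,
                      initial_threshold=128, step=8, max_iterations=20):
    """Binarize `image` (a 2-D list of gray values 0-255) so that the number
    of below-threshold ("object") pixels falls inside [target_min, target_max].
    The threshold is lowered when the object looks too big and raised when it
    looks too small. Returns the binary mask (1 = object) and the threshold."""
    threshold = initial_threshold
    for _ in range(max_iterations):
        object_size = sum(1 for row in image for p in row if p < threshold)
        if object_size > target_max:      # object too big -> lower threshold
            threshold -= step
        elif object_size < target_min:    # object too small -> raise threshold
            threshold += step
        else:
            break                         # object has the expected size
    mask = [[1 if p < threshold else 0 for p in row] for row in image]
    return mask, threshold
```

As the text notes, this only converges to something meaningful when the expected object size is known in advance.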
2. Images with noise and “snow”
We cannot know whether a captured image is what the video camera actually “sees” or is heavily disturbed. Therefore, every time we capture and process an image, we do it three times. There are two different ways to combine the results:

a. When we need a final picture, we capture the image three times from the same place and process each one – turning it black and white according to the threshold. The final image is composed of the three: if a pixel is black in at least two of them, it is black; otherwise it is white.
b. When we need to count the pixels with a given value, we count them in three different images captured in the same place and take the average of the three counts.

This way, if at least two of the three pictures are reasonably real, we achieve a good result.
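Both combination methods are simple to state in code; this is a minimal sketch of the two-out-of-three pixel vote (method a) and the count averaging (method b), with hypothetical function names:

```python
def majority_vote(img_a, img_b, img_c):
    """Fuse three binarized captures of the same scene (1 = black, 0 = white):
    a pixel is black in the result only if it is black in at least two of the
    three input images."""
    return [[1 if a + b + c >= 2 else 0
             for a, b, c in zip(row_a, row_b, row_c)]
            for row_a, row_b, row_c in zip(img_a, img_b, img_c)]

def average_count(counts):
    """Average a pixel count taken over three captures from the same place."""
    return sum(counts) / len(counts)
```

A single corrupted frame is outvoted by the other two, which is exactly why two good captures out of three suffice.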
3. Salt-and-pepper noise
Salt-and-pepper noise is cleaned with a median filter.
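A single 3×3 median pass can be sketched as follows (the project applies the filter repeatedly; the border handling here is a simplifying assumption):

```python
def median_filter(image):
    """One pass of a 3x3 median filter over a 2-D list of gray values.
    Isolated bright or dark pixels (salt-and-pepper noise) are replaced by
    the median of their neighborhood. Border pixels are left unchanged."""
    height, width = len(image), len(image[0])
    out = [row[:] for row in image]
    for y in range(1, height - 1):
        for x in range(1, width - 1):
            window = sorted(image[y + dy][x + dx]
                            for dy in (-1, 0, 1) for dx in (-1, 0, 1))
            out[y][x] = window[4]          # median of the 9 window values
    return out
```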

The recognition of the black line

We capture a gray-level image. The direction of the line is checked only in the part of the image closest to the vehicle. Starting from the middle of the image, we count, on each side (left and right), the number of pixels darker than some threshold. The line is on the side that contains more such pixels. According to this, the vehicle turns a little toward the side of the line and straightens itself along it.
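The steering decision reduces to comparing two dark-pixel counts; a minimal sketch (the darkness threshold of 60 is an illustrative assumption):

```python
def line_side(gray_band, threshold=60):
    """Decide which way to steer from the band of the image closest to the
    vehicle (a 2-D list of gray values). Pixels darker than `threshold` are
    counted left and right of the image centre; the black line is assumed to
    lie on the side with more dark pixels."""
    mid = len(gray_band[0]) // 2
    left = sum(1 for row in gray_band for p in row[:mid] if p < threshold)
    right = sum(1 for row in gray_band for p in row[mid:] if p < threshold)
    if left > right:
        return "left"
    if right > left:
        return "right"
    return "straight"
```

The vehicle then turns slightly toward the returned side before re-checking on the next captured frame.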

The recognition of the wall
We use wall recognition in two cases: to check the proximity of the vehicle to the wall, and to check its proximity to an obstacle; the two cases differ in the height of the rectangle being checked. If the number of very bright pixels (above a particular threshold) in the rectangle exceeds 98.5%, the rectangle is considered white, meaning a wall or obstacle has been discovered. If the fraction of white pixels is lower than that, slightly darker pixels are counted as white as well (the threshold is relaxed). This percentage was chosen because, if the average size of an obstacle or line in the picture is known (and therefore the number of very dark pixels is known), we can tell how many pixels will be considered dark even in the extreme cases.
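A sketch of this whiteness test, assuming the relaxation works by lowering the brightness cutoff so slightly darker pixels count as white; the cutoff value, relaxation step, and retry count are illustrative assumptions:

```python
def is_white_rectangle(rect, bright_cutoff=200, required_fraction=0.985,
                       relax_step=10, max_relaxations=3):
    """Return True if the rectangle `rect` (a 2-D list of gray values) is
    'white', i.e. a wall or obstacle top: at least 98.5% of its pixels must
    exceed the brightness cutoff. On a narrow failure, the cutoff is lowered
    a few times so that slightly darker pixels are accepted as white too."""
    total = sum(len(row) for row in rect)
    cutoff = bright_cutoff
    for _ in range(max_relaxations + 1):
        bright = sum(1 for row in rect for p in row if p > cutoff)
        if bright / total >= required_fraction:
            return True
        cutoff -= relax_step              # relax: accept slightly darker pixels
    return False
```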

The obstacle recognition:

The image is captured in RGB and transformed to HSV:

Figure 12 – HSV

Each pixel whose R, G and B values are all higher than 80 (i.e., not too dark) and whose saturation S is higher than 0.4 (saturation ranges from 0, the lowest value, to 1, the highest; a high value means a colorful spectrum) turns black, and all other pixels turn white. At the end of the process all the obstacles in the image are black and the background is white. Here we use the two noise-cleaning methods mentioned above. We also apply a median filter to each image five times to clean shot noise.
Figure 13 – One of the three captured images and the final image of an obstacle
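The per-pixel test can be sketched with the standard RGB-to-HSV conversion (a minimal sketch; the project itself was written in Visual C++, and the thresholds 80 and 0.4 are the ones stated above):

```python
import colorsys

def obstacle_mask(rgb_image):
    """Mark obstacle pixels in an image given as a 2-D list of (r, g, b)
    tuples with 0-255 channels. A pixel whose R, G and B are all above 80
    (not too dark) and whose HSV saturation exceeds 0.4 (clearly colorful)
    becomes black (1); every other pixel becomes white (0)."""
    mask = []
    for row in rgb_image:
        out = []
        for r, g, b in row:
            _h, s, _v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
            bright_enough = r > 80 and g > 80 and b > 80
            out.append(1 if bright_enough and s > 0.4 else 0)
        mask.append(out)
    return mask
```

Gray walls and the white obstacle tops have near-zero saturation, so only the colorful obstacle bodies survive the test.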

In the resulting black-and-white pictures of the seen obstacles, the average width of the obstacle in the middle of the picture (the one on the route of the car) is calculated.
Approximation of the distance to the object:
In order to capture the picture of the object from the right distance, the approximate distance is calculated by the law of perspective: the visible width of the object grows as a function of the distance the vehicle has traveled between the first picture and the second.

Figure 14 – Approximation of distance to the object by perspective
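Under the pinhole (perspective) model the apparent width w is inversely proportional to the distance d, so two captures give w1·d1 = w2·d2 with d1 = d2 + Δ, where Δ is the distance traveled between them. Solving for the remaining distance gives d2 = w1·Δ / (w2 − w1). A minimal sketch of this computation (the function name is hypothetical):

```python
def distance_after_second_capture(w1, w2, travelled):
    """Estimate the remaining distance to an obstacle from two captures.
    w1, w2: obstacle widths in pixels in the first and second image;
    travelled: distance driven straight toward the obstacle between them.
    From w1*d1 = w2*d2 and d1 = d2 + travelled:
        d2 = w1 * travelled / (w2 - w1).
    Requires w2 > w1 (the obstacle must look wider from closer up)."""
    if w2 <= w1:
        raise ValueError("second capture must show a wider (closer) obstacle")
    return w1 * travelled / (w2 - w1)
```

The denominator w2 − w1 explains the conclusion below: over short travel distances the width difference is tiny, so a small measurement error produces a large distance error.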

Results
The results that have been achieved:

  • The vehicle moves along the black lines that define its course. If it deviates from the course, it corrects its track and continues in the correct direction
  • Obstacles are identified by their color saturation. From two sequential captures and the calculated width of the obstacle, the distance to it is computed. The vehicle approaches the obstacle until bypassing is needed, and then bypasses it
  • The wall is identified from a very close distance, as needed; however, this distance varies by a few centimeters each time
  • The locations of the obstacles are saved in a linked list, which allows drawing an updated map at the end with the real locations of the obstacles, with about 10% error in the X coordinate
  • After the second exploration all the missing obstacles are found and their coordinates are presented to the user. The new map shows all the current obstacles along with the new ones
  • The user can manipulate the vehicle and give it orders in an easy way
  • Despite much interference in the broadcast, we manage to overcome most of it by the means described above

Conclusions

  • We cannot identify obstacles placed close to the wall or to other obstacles, since the identification algorithm requires two sequential captures; to identify an obstacle by perspective, we need to see it from a sufficient distance
  • The program can overcome interference in the broadcast if at least two of the three captured images are good enough
  • A highly saturated color cannot occupy too large a part of an image, since the background would then appear highly saturated too
  • Changes in the brightness of the image influence the performance of the project and the image processing
  • Dynamically changing the threshold helps to deal with brightness problems, provided the estimated size of the object is known
  • Searching for the obstacles by their color saturation restricts the variety of obstacles that can be identified, and restricts the color of the surroundings
  • Calculating the distance to an obstacle from its width goes wrong if, in one of the images the calculation relies on, the obstacle is seen partially or from a different angle
  • Calculating the distance by perspective over small areas is problematic, since every small measurement error leads to a large error in the distance

Acknowledgement
We are grateful to our project supervisor Johanan Erez for his guidance and all the help.
We are also grateful to Ina Krinski and Aharon Yacoby for all their help.
We are also grateful to the Ollendorff Minerva Center Fund for supporting this project.