LEGO Vehicle Chase Using a Dome Camera


Abstract
The project’s mission is to build mobile robots based on LEGO-MINDSTORMS vehicles and to design and implement an algorithm which controls the movements of two such vehicles so that one of them tracks and follows the other until it manages to catch it.
The “chased” vehicle moves independently in a closed environment, while the “chaser” vehicle is controlled by a computer and monitored by a dome camera.
The objectives are to build the vehicles and produce a platform for controlling them, to operate the camera, and finally to make the “chaser” catch the “chasee” as accurately and quickly as possible using these means.

Tools & Environment (some technicalities)

  • The two vehicles were built from a LEGO-MINDSTORMS kit and equipped with an RCX, an electronic device which allows control of LEGO motors from a computer. The RCX can be loaded with programs and can communicate in real time with the computer through a tower.

Figure 1 Dome camera
Figure 2 The RCX unit

Figure 3 A vehicle with the RCX

  • The dome camera is hung above the working environment, filming it, and is connected to a computer. The input is handled by CamCapture (an MFC-based program), which allows the streamed pictures to be sampled.
  • The RCX can be loaded with NQC programs which enable control of the vehicles.
  • The working area is surrounded by black stripes of duct tape, as seen in the pictures. The stripes’ use is explained later on this page.

The development process
Due to the nature of the equipment we worked with, we decided to develop the resources for the project first, and then design an appropriate algorithm. The resource development was done on two “fronts”: working the camera and analyzing the pictures, and activating the LEGO vehicles using C++.

1. Working the camera and picture analysis
The first thing we did was to learn how to display a picture on the screen using CamCapture and MFC.
We did this with the help of past projects carried out in the VISL lab (see References at the end of this page).
Our primary goal in the picture analysis was to find the vehicles in the picture at any given time and to estimate their direction. To achieve this we decided to process every picture in the following way:
1. Sample a background picture once.
2. Subtract the background picture from every new picture we sample, giving a “differences” picture.
3. Perform color-based filtering on the “differences” picture, keeping only pixels within a certain color region (in the RGB sense). The result is a binary picture, where 1 means a pixel is present and 0 means it isn’t.
4. Apply an algorithm which turns adjacent pixel groups into closed objects, and for each object compute quantities such as the number of pixels, the center of mass and the corner locations.
5. For better filtering, keep only objects with more pixels than a threshold we selected.
After all this, the computer’s memory should hold an object picture containing only two objects (the vehicles), together with each vehicle’s position (its center of mass) and approximate direction. The whole process is demonstrated in the following pictures, and a code sketch of the pipeline appears after them:

Figure 4 A sampled background picture
Figure 5 Cars as seen by the dome camera
Figure 6 Difference between a new picture and the background
Figure 7 Binary picture after color processing
Figure 8 Symbolic demonstration of making the object picture
Figure 9 Object picture before removing small objects
Figure 10 Object picture after removing small objects
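For concreteness, the pipeline above can be condensed into a short C++ sketch. This is a minimal illustration rather than the project’s actual MFC/CamCapture code: the buffer layout, the box-shaped test for the RGB color region and all names are assumptions of this write-up, and corner extraction is omitted.

```cpp
#include <cstdint>
#include <cstdio>
#include <cstdlib>
#include <queue>
#include <vector>

// One detected object: pixel count and center of mass.
struct Blob {
    int pixels = 0;
    double cx = 0.0, cy = 0.0; // center of mass in image coordinates
};

// Steps 2-3: subtract the background picture from a new sample and keep only
// pixels whose difference falls inside a box-shaped RGB region, giving a
// binary picture (1 = pixel present, 0 = no pixel).
// img and bg are interleaved RGB buffers, 3 bytes per pixel, w*h pixels.
std::vector<uint8_t> makeBinaryPicture(const uint8_t* img, const uint8_t* bg,
                                       int w, int h,
                                       const int minDiff[3], const int maxDiff[3]) {
    std::vector<uint8_t> bin(static_cast<size_t>(w) * h, 0);
    for (int i = 0; i < w * h; ++i) {
        bool inRegion = true;
        for (int c = 0; c < 3 && inRegion; ++c) {
            int d = std::abs(int(img[3 * i + c]) - int(bg[3 * i + c]));
            inRegion = (d >= minDiff[c] && d <= maxDiff[c]);
        }
        if (inRegion) bin[i] = 1;
    }
    return bin;
}

// Steps 4-5: turn adjacent pixel groups into closed objects (4-connected
// flood fill), compute each object's pixel count and center of mass, and
// drop objects smaller than the selected threshold.
std::vector<Blob> findObjects(std::vector<uint8_t>& bin, int w, int h,
                              int minPixels) {
    std::vector<Blob> blobs;
    for (int start = 0; start < w * h; ++start) {
        if (bin[start] != 1) continue;
        Blob b;
        std::queue<int> q;
        q.push(start);
        bin[start] = 2; // 2 = already visited
        while (!q.empty()) {
            int i = q.front(); q.pop();
            int x = i % w, y = i / w;
            ++b.pixels; b.cx += x; b.cy += y;
            const int dx[4] = {1, -1, 0, 0}, dy[4] = {0, 0, 1, -1};
            for (int k = 0; k < 4; ++k) {
                int nx = x + dx[k], ny = y + dy[k];
                if (nx >= 0 && nx < w && ny >= 0 && ny < h &&
                    bin[ny * w + nx] == 1) {
                    bin[ny * w + nx] = 2;
                    q.push(ny * w + nx);
                }
            }
        }
        b.cx /= b.pixels; b.cy /= b.pixels;
        if (b.pixels >= minPixels) blobs.push_back(b); // small-object filter
    }
    return blobs;
}

int main() {
    // Tiny 4x1 demo: black background, one bright pixel in the new picture.
    uint8_t bg[12] = {0}, img[12] = {0};
    img[3] = img[4] = img[5] = 200; // pixel 1 changed
    const int minDiff[3] = {50, 50, 50}, maxDiff[3] = {255, 255, 255};
    auto bin = makeBinaryPicture(img, bg, 4, 1, minDiff, maxDiff);
    auto blobs = findObjects(bin, 4, 1, 1);
    if (!blobs.empty())
        std::printf("%zu object(s), first at (%.0f, %.0f)\n",
                    blobs.size(), blobs[0].cx, blobs[0].cy);
}
```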

2. Activating the LEGO vehicles using C++
As said before, the RCXs can be loaded with NQC programs which allow them to be controlled.
The RCX can activate the car’s engines and receive data from its sensors.
We used Phantom ActiveX (a C++ object) and an RCX tower with a USB connection. With this equipment it was possible to control both cars simultaneously. Two manners of control were developed for the chased vehicle: automatic (no connection to the computer) and manual (the user controls the vehicle using the computer’s keyboard).
For the autonomous control, the “chased” car was fitted with two light sensors pointing downward (see Figure 12).
The program loaded into it tells it to go straight ahead (both engines forward) until one of the sensors senses that the floor is darker (which means it has hit a tape stripe), and then to turn away to the other side. This ensures that the vehicle is always moving and never leaves the camera’s sight.
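This bounce behavior amounts to only a few lines of logic. The real program runs on the RCX and is written in NQC; the C++ rendering below uses hypothetical stubs for the sensor and motor primitives, and the darkness threshold is an assumed value:

```cpp
#include <cstdio>

const int DARK_THRESHOLD = 40; // below this, the floor reads as black tape (assumed value)

// Hypothetical stubs standing in for the RCX sensor and motor primitives;
// the real program runs on the RCX itself and is written in NQC.
int  readLeftSensor()  { return 60; } // raw reading of the left light sensor
int  readRightSensor() { return 60; } // raw reading of the right light sensor
void driveForward()    { std::puts("both engines forward"); }
void turnAwayLeft()    { std::puts("turning left, away from the tape"); }
void turnAwayRight()   { std::puts("turning right, away from the tape"); }

// One control tick of the bounce behavior described above.
void chasedStep() {
    if (readLeftSensor() < DARK_THRESHOLD)
        turnAwayRight();   // left sensor on the stripe: veer right
    else if (readRightSensor() < DARK_THRESHOLD)
        turnAwayLeft();    // right sensor on the stripe: veer left
    else
        driveForward();    // open floor: keep going straight
}

int main() { chasedStep(); } // a single tick, for illustration
```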
For the user-defined control, the vehicle was loaded with a more complicated program: what the car does is determined by a variable (the car can move forward or backward, turn left or right, and stop).
This variable’s value can be changed from C++ through Phantom and the RCX tower in real time. This way we could control the car at any moment.
The “chaser” car was controlled through the same variable mechanism as the “chased” vehicle’s manual mode, except that the variable is set by the program itself rather than by the user.
Our final software can switch between manual and automatic control at the user’s choice.
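The command-variable scheme looks roughly like the sketch below. The enum values, the key bindings and the setRcxVariable wrapper are illustrative assumptions; the real write goes through the Phantom ActiveX object, whose actual methods are not reproduced here:

```cpp
#include <cstdio>

// The five states of the command variable polled by the RCX program.
enum Command { CMD_STOP = 0, CMD_FORWARD, CMD_BACKWARD, CMD_LEFT, CMD_RIGHT };

// Hypothetical wrapper around the Phantom ActiveX call that writes a
// variable on the RCX through the tower in real time.
void setRcxVariable(int varIndex, int value) {
    std::printf("RCX var %d <- %d\n", varIndex, value); // real code calls Phantom here
}

// Manual mode: map a keyboard key to a command and ship it to the car.
// The key bindings are illustrative, not the project's actual ones.
void handleKey(char key) {
    Command cmd;
    switch (key) {
        case 'w': cmd = CMD_FORWARD;  break;
        case 's': cmd = CMD_BACKWARD; break;
        case 'a': cmd = CMD_LEFT;     break;
        case 'd': cmd = CMD_RIGHT;    break;
        default:  cmd = CMD_STOP;     break; // any other key stops the car
    }
    setRcxVariable(0, cmd); // variable 0 is assumed to hold the command
}

int main() { handleKey('w'); handleKey('x'); } // drive forward, then stop
```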
Figure 11 RCX and control tower (COM)
Figure 12 Chased vehicle with sensors on, reaching the tape line

To the chase
After developing the required resources, all that remained was to design an algorithm, implement it and test it. We went for the simplest possible algorithm: the “chaser” vehicle attempts to catch the center of mass of the “chased” vehicle. In implementing this algorithm we encountered several problems:
1. It was difficult to determine the direction of the “chaser” vehicle.
2. It was impossible to set the “chaser’s” direction to face the “chased” vehicle’s center of mass exactly.
3. The overhead lights shone brightly over the vehicles, leaving the program unable to locate the cars in certain areas (see Figure 13).
4. We needed to let the software know which car is the “chaser” and which is the “chasee”.

The first and third problems were solved by mounting two LEGO boards on the “chaser” vehicle: a red one at the front and a green one at the back. This way we could take the direction between the two boards as the vehicle’s direction (this was tested empirically and found effective). The boards also made the car bigger, so the whole car could never fall inside a bright spot, which solved the third problem (Figure 14). The second problem was solved by defining a zone around the exact correct direction within which the algorithm decides the vehicle is facing the right way (empirically, a 60-degree zone gave good results).
The fourth problem was solved simply by letting the user tell the system which vehicle is which; the system then keeps track of each vehicle’s identity.
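With the boards in place, the direction and decision-zone test reduce to a few lines of geometry. A minimal sketch, assuming the picture analysis already supplies the centroids of the red and green boards and of the chased car (names and coordinate conventions are ours):

```cpp
#include <cmath>
#include <cstdio>

const double PI = 3.14159265358979323846;
const double HALF_ZONE = 30.0 * PI / 180.0; // the 60-degree zone: +/-30 around the heading

struct Point { double x, y; };

// Normalize an angle difference into (-pi, pi].
double angleDiff(double a, double b) {
    double d = a - b;
    while (d > PI)   d -= 2.0 * PI;
    while (d <= -PI) d += 2.0 * PI;
    return d;
}

// Decide the chaser's next move from the two board centroids (red = front,
// green = back) and the chased car's center of mass.
// Returns 0 = target in the decision zone (go forward), +1 / -1 = turn.
// Which sign means "right" depends on the image coordinate convention.
int decide(Point red, Point green, Point target) {
    // Heading: the direction from the green board to the red board.
    double heading = std::atan2(red.y - green.y, red.x - green.x);
    Point center{(red.x + green.x) / 2.0, (red.y + green.y) / 2.0};
    double bearing = std::atan2(target.y - center.y, target.x - center.x);
    double d = angleDiff(bearing, heading);
    if (std::fabs(d) <= HALF_ZONE) return 0; // inside the zone: drive forward
    return d > 0 ? 1 : -1;                   // outside: turn toward the target
}

int main() {
    // Chaser at the origin heading along +x; target slightly off-axis.
    std::printf("decision: %d\n", decide({10, 0}, {0, 0}, {30, 5}));
}
```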
Figure 13 Vehicle in the lights before installing the boards
Figure 14 Vehicle in the lights after installing the boards
Figure 15 The “chaser” vehicle’s decision zone

The whole system
In general, the system works as follows (after sampling the background and determining the vehicles’ identity); a code sketch of this loop follows the list:
1. Sample a picture from the camera.
2. Analyse the picture as specified above.
3.1. If the “chased” vehicle’s center of mass is in the “chaser” vehicle’s decision zone, move forward.
3.2. If the “chased” vehicle’s center of mass is not in the decision zone and is to its right, turn right.
3.3. If the “chased” vehicle’s center of mass is not in the decision zone and is to its left, turn left.
3.4. If the cars are close (according to a predefined distance), stop the vehicle.
4. Return to 1.
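A minimal sketch of this loop, with hypothetical hooks standing in for the picture analysis and the Phantom-based command channel described above (all names and the catch distance are assumptions):

```cpp
#include <cstdio>

// Results of the picture analysis for one frame: whether the chased car's
// center of mass lies in the decision zone, where it is, and how far.
struct Frame {
    bool inZone;        // the +/-30 degree test from the previous section
    bool targetToRight; // only meaningful when inZone is false
    double distance;    // distance between the two cars, in pixels
};

enum Command { CMD_STOP = 0, CMD_FORWARD, CMD_LEFT, CMD_RIGHT };

// Hypothetical hooks; real code samples the dome camera, runs the pipeline
// above and writes the command variable through Phantom and the tower.
Frame sampleAndAnalyse() {
    static double d = 100.0; // dummy: pretend the cars drift closer each frame
    d -= 10.0;
    return {true, false, d};
}
void sendCommand(Command cmd) { std::printf("command %d\n", cmd); }

const double CATCH_DISTANCE = 20.0; // predefined; the value is an assumption

void chaseLoop() {
    for (;;) {
        Frame f = sampleAndAnalyse();      // steps 1-2
        if (f.distance < CATCH_DISTANCE) { // step 3.4: close enough, stop
            sendCommand(CMD_STOP);
            break;
        }
        if (f.inZone)             sendCommand(CMD_FORWARD); // step 3.1
        else if (f.targetToRight) sendCommand(CMD_RIGHT);   // step 3.2
        else                      sendCommand(CMD_LEFT);    // step 3.3
    }                                      // step 4: return to 1
}

int main() { chaseLoop(); }
```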

Figure 16 Cars in chase

Summary and Conclusions
The project was successful: we saw the “chaser” vehicle actually catch the other one time after time. The project was an educational and fun experience in using available resources to perform a specified task.
We had a few ideas for future development which we did not have the time to implement (such as designing a second car with an escape algorithm, or putting in more than one chaser and/or more than one chasee).
We hope some day these projects will be carried out in this laboratory.

Acknowledgments
We’d like to thank our supervisor and lab engineer, Johanan Erez.
We’d also like to thank the lab assistant Dror Ouzana, who helped us operate the LEGO vehicles.
Our thanks to the VISL lab staff, who made this project possible.
We’d also like to thank the other students who carried out LEGO projects in the lab for sharing their thoughts.
We would also like to express our special gratitude to the Ollendorff Minerva Center for supporting this project.

References, code and additional stuff