Vision Based Pursuing of Moving Vehicle from Bird’s View – PART II

Abstract
This project continues our first project.
In the first project we used a ceiling camera to control one LEGO vehicle that chased another, preprogrammed LEGO vehicle.

Improvements made in this project:

  • Improved object recognition
  • Simultaneous control of 2 vehicles
  • Addition of obstacles
  • Increased vehicle speed
  • Improved work environment (both the code and the user interface)

To demonstrate the system’s capabilities we implemented a “PacMan”-like game.

One vehicle heads to a fixed location (“home”) while the other vehicle follows it and tries to catch it. If the first vehicle reaches its “home”, it becomes the “catcher” and the second vehicle starts heading to its own home. If one vehicle catches the other, the program halts and announces the catch.
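The following C++ fragment is a minimal sketch of this role-switching rule; the names (Vehicle, updateRoles, the boolean tests) are illustrative, not the project’s actual code.

    #include <utility>

    // Illustrative sketch of the game's role-switching rule.
    enum Role { RUNNER, CATCHER };

    struct Vehicle { Role role; /* position, home base, ... */ };

    // Returns true when the game is over (a catch happened).
    bool updateRoles(Vehicle &a, Vehicle &b, bool caught, bool runnerIsHome)
    {
        if (caught)
            return true;                  // halt and announce the catch
        if (runnerIsHome)
            std::swap(a.role, b.role);    // runner becomes catcher and vice versa
        return false;
    }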

Work environment

  • 2 ‘LEGO Mindstorms’ vehicles, each controlled by an RCX brick + a USB IR transmitter attached to a PC
  • Ceiling-mounted dome camera (WATEC 207-CD)
  • PC + video card (Pentium 4 & ASUS GeForce V7700/64 VIVO graphics card)
  • Software: the program’s code is written in C++ and uses 2 ActiveX modules:
      • VideoOCX – supports video capture and manipulation
      • Phantom – supports the RCX IR communication

Figure 1 The LEGO vehicles
Figure 2 The RCX unit

The User Interface
Figure 3 The user interface

Yellow and green crosses mark the vehicles and the home base of the running vehicle.

A red cross marks the red dot on each vehicle.

A blue cross marks the next move when “obstacle mode” is active.

White X’s mark the obstacles.

A counter in the upper left corner indicates the sampling speed.

In order to run the program you should:

  1. Download the NQC programs to the bricks (yellowrover.nqc and greenrover.nqc). Make sure that the correct firmware is installed (firm328.lgo).
  2. Place the obstacles, turn on the bricks, and press “Init”.
  3. Insert the vehicles and press “Run”.
  4. “Change roles” manually switches the vehicles’ roles.
  5. “Stop” halts the vehicles; press “Run” again to continue.

The Algorithm
Upon initialization the empty-field picture is sampled; in this phase the obstacles are recognized. This is done only once.
Then we run a loop that has 5 stages (a schematic outline follows the list):

1. Picture sampling
2. Object separation
3. Recognition of the vehicles
4. Path planning
5. Movement control
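A schematic C++ outline of this loop; all the types and function names here are illustrative stand-ins for the project’s real modules.

    #include <iostream>

    // Illustrative stand-ins for the project's real data structures.
    struct Frame {};
    struct ObjectList {};
    struct Vehicles {};
    struct Map {};

    Frame      samplePicture()                         { return Frame(); }
    ObjectList separateObjects(const Frame &)          { return ObjectList(); }
    Vehicles   recognizeVehicles(const ObjectList &)   { return Vehicles(); }
    double     planPath(const Vehicles &, const Map &) { return 0.0; }
    void       sendMoveCommand(const Vehicles &, double angle)
    {
        std::cout << "steering towards " << angle << " degrees\n";
    }

    int main()
    {
        Map obstacleMap;                 // built once during initialization
        for (int i = 0; i < 10; ++i) {   // the real loop runs until a catch
            Frame frame      = samplePicture();             // 1. picture sampling
            ObjectList objs  = separateObjects(frame);      // 2. object separation
            Vehicles vehs    = recognizeVehicles(objs);     // 3. recognition of the vehicles
            double direction = planPath(vehs, obstacleMap); // 4. path planning
            sendMoveCommand(vehs, direction);               // 5. movement control
        }
        return 0;
    }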

Initialization
In this stage we save the background picture, determine the locations of the obstacles, and start the bricks’ programs.

The picture is converted into a low-resolution matrix: each “super pixel” represents a 10×10 block of pixels of the original picture. The result is a binary map that marks obstacle and border pixels as “1” and accessible pixels as “0”.
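A minimal C++ sketch of this conversion; the per-pixel obstacle test here (a simple darkness threshold) is an assumption, since the real criterion is not spelled out in this summary.

    #include <vector>

    // Turn a 320x240 RGB background frame into a 32x24 binary map.
    // Each "super pixel" covers a 10x10 block and is marked 1 (blocked)
    // if any pixel in the block passes the obstacle test.
    struct RGB { unsigned char r, g, b; };

    const int W = 320, H = 240, CELL = 10;

    std::vector<std::vector<int> >
    buildObstacleMap(const std::vector<RGB> &background, int threshold)
    {
        std::vector<std::vector<int> > map(H / CELL, std::vector<int>(W / CELL, 0));
        for (int y = 0; y < H; ++y)
            for (int x = 0; x < W; ++x) {
                const RGB &p = background[y * W + x];
                if ((p.r + p.g + p.b) / 3 < threshold)   // dark pixel => obstacle
                    map[y / CELL][x / CELL] = 1;
            }
        // Border cells are blocked too, so planned paths stay inside the field.
        for (int i = 0; i < H / CELL; ++i)
            map[i][0] = map[i][W / CELL - 1] = 1;
        for (int j = 0; j < W / CELL; ++j)
            map[0][j] = map[H / CELL - 1][j] = 1;
        return map;
    }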

Figure 4 The original background picture
Figure 5 The manipulated background picture

The main loop

1. Sampling
The program samples the picture in 24-bit RGB format at 320×240 resolution.
This picture can be accessed and manipulated easily thanks to the VideoOCX module.

2. Object separation
We subtract the background picture from the sampled picture. The difference matrix is scanned pixel by pixel: if one of a pixel’s components (R, G or B) exceeds a certain threshold, we save the pixel in a list and then check its neighbors using the “region growing” method. When a pixel enters the list, we transform its absolute RGB values into a single hue value (using the RGB-to-HSV transformation). We eliminate small lists and save each object’s (list’s) attributes: size, hue, and position of the center of mass.
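A simplified C++ sketch of this stage, assuming plain pixel arrays rather than the VideoOCX buffers the real program works on; rgbToHue() is the standard RGB-to-HSV hue formula.

    #include <algorithm>
    #include <cmath>
    #include <cstdlib>
    #include <queue>
    #include <vector>

    struct RGB  { unsigned char r, g, b; };
    struct Blob { int size; double meanHue, cx, cy; };   // an object's attributes

    // Standard RGB -> HSV hue, in degrees [0, 360).
    static double rgbToHue(const RGB &p)
    {
        double r = p.r / 255.0, g = p.g / 255.0, b = p.b / 255.0;
        double mx = std::max(r, std::max(g, b)), mn = std::min(r, std::min(g, b));
        double d = mx - mn;
        if (d == 0.0) return 0.0;                // grey: hue undefined, use 0
        double h;
        if (mx == r)      h = std::fmod((g - b) / d, 6.0);
        else if (mx == g) h = (b - r) / d + 2.0;
        else              h = (r - g) / d + 4.0;
        return 60.0 * (h < 0.0 ? h + 6.0 : h);
    }

    // True when a pixel differs from the background in any colour component.
    static bool differs(const RGB &a, const RGB &b, int thresh)
    {
        return std::abs(a.r - b.r) > thresh ||
               std::abs(a.g - b.g) > thresh ||
               std::abs(a.b - b.b) > thresh;
    }

    std::vector<Blob> separateObjects(const std::vector<RGB> &frame,
                                      const std::vector<RGB> &bg,
                                      int W, int H, int thresh, int minSize)
    {
        std::vector<char> visited(W * H, 0);
        std::vector<Blob> blobs;
        for (int i = 0; i < W * H; ++i) {
            if (visited[i] || !differs(frame[i], bg[i], thresh)) continue;
            Blob b = {0, 0.0, 0.0, 0.0};
            std::queue<int> q;                   // grow the region from this seed
            q.push(i);
            visited[i] = 1;
            while (!q.empty()) {
                int j = q.front(); q.pop();
                int jx = j % W, jy = j / W;
                b.size++; b.cx += jx; b.cy += jy;
                b.meanHue += rgbToHue(frame[j]); // naive average: fine for hues
                                                 // away from the 0/360 wrap
                int nx[4] = {jx - 1, jx + 1, jx, jx};
                int ny[4] = {jy, jy, jy - 1, jy + 1};
                for (int k = 0; k < 4; ++k) {    // 4-connected neighbours
                    if (nx[k] < 0 || nx[k] >= W || ny[k] < 0 || ny[k] >= H)
                        continue;
                    int n = ny[k] * W + nx[k];
                    if (!visited[n] && differs(frame[n], bg[n], thresh)) {
                        visited[n] = 1;
                        q.push(n);
                    }
                }
            }
            if (b.size < minSize) continue;      // eliminate small lists
            b.cx /= b.size; b.cy /= b.size; b.meanHue /= b.size;
            blobs.push_back(b);
        }
        return blobs;
    }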

3. Recognition of the vehicles
Both vehicles’ specifications (average hue and size) are known in advance.
Now we need to match them to the found objects. We do this by grading the found objects according to their similarity to the vehicles; the highest-graded ones are taken as the vehicles.
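One possible grading scheme in C++; the score weights below are illustrative assumptions, not the project’s actual formula.

    #include <cmath>
    #include <vector>

    struct Blob { int size; double meanHue, cx, cy; };

    // Grade each blob by closeness of hue and size to a vehicle's known
    // profile; the lowest score wins. Returns the blob index, or -1 if none.
    int bestMatch(const std::vector<Blob> &blobs, double vehicleHue, int vehicleSize)
    {
        int best = -1;
        double bestScore = 1e9;
        for (int i = 0; i < (int)blobs.size(); ++i) {
            double dh = std::fabs(blobs[i].meanHue - vehicleHue);
            if (dh > 180.0) dh = 360.0 - dh;               // hue is circular
            double ds = std::fabs(blobs[i].size - (double)vehicleSize) / vehicleSize;
            double score = dh / 180.0 + ds;                // illustrative weights
            if (score < bestScore) { bestScore = score; best = i; }
        }
        return best;
    }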

4. Path planning
First, we decide whether there is an obstacle between the vehicle and its target. If the path is clear, we move directly towards the target; if not, we apply a “maze” algorithm that finds the shortest path on the obstacle map. The “target” direction is then the direction of the first step we need to take along that path.
Detailed information about the algorithm can be found in the project’s book.
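A minimal sketch of such a search, assuming a breadth-first search over the binary obstacle map; the actual maze algorithm is the one described in the project’s book.

    #include <queue>
    #include <vector>

    struct Cell { int x, y; };

    // BFS on the obstacle grid (1 = blocked). Returns the first cell to step
    // to on a shortest path from start to goal, or start if no path exists.
    Cell firstStep(const std::vector<std::vector<int> > &map, Cell start, Cell goal)
    {
        int H = (int)map.size(), W = (int)map[0].size();
        Cell none = {-1, -1};
        std::vector<std::vector<Cell> > parent(H, std::vector<Cell>(W, none));
        std::queue<Cell> q;
        q.push(start);
        parent[start.y][start.x] = start;              // marks the cell visited
        int dx[4] = {1, -1, 0, 0}, dy[4] = {0, 0, 1, -1};
        while (!q.empty()) {
            Cell c = q.front(); q.pop();
            if (c.x == goal.x && c.y == goal.y) {
                // Walk back until we reach the cell adjacent to the start.
                while (!(parent[c.y][c.x].x == start.x &&
                         parent[c.y][c.x].y == start.y))
                    c = parent[c.y][c.x];
                return c;
            }
            for (int k = 0; k < 4; ++k) {
                int nx = c.x + dx[k], ny = c.y + dy[k];
                if (nx < 0 || nx >= W || ny < 0 || ny >= H) continue;
                if (map[ny][nx] == 1) continue;        // obstacle or border
                if (parent[ny][nx].x != -1) continue;  // already visited
                Cell n = {nx, ny};
                parent[ny][nx] = c;
                q.push(n);
            }
        }
        return start;                                  // target unreachable
    }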

5. Movement control
The RCX brick has 3 inputs and 3 outputs.
The LEGO vehicle has two wheels; each wheel is connected to an engine.
Each engine is controlled by a different RCX output and has 8 speed levels (0–7).
The RCX code controls the engines’ speed.

We designed 3 modes of movement for the vehicles (see the sketch after this list):

  1. Forward – both engines are set to level 7 (full speed)
  2. Curve left / curve right – one engine is set to level 1 (minimal speed) and the other to level 7 (full speed)
  3. Spin – one engine is set to level 7 (forward) and the other to level 0 (stop)
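In C++ terms, these modes map to (left, right) motor power pairs roughly as follows. The actual commands reach the bricks through the Phantom module, whose API is not shown here.

    // Movement modes expressed as (left, right) RCX power levels (0-7).
    enum Move { FORWARD, CURVE_LEFT, CURVE_RIGHT, SPIN_LEFT, SPIN_RIGHT };

    void motorLevels(Move m, int &left, int &right)
    {
        switch (m) {
        case FORWARD:     left = 7; right = 7; break; // both at full speed
        case CURVE_LEFT:  left = 1; right = 7; break; // inner wheel crawls
        case CURVE_RIGHT: left = 7; right = 1; break;
        case SPIN_LEFT:   left = 0; right = 7; break; // one wheel stopped
        case SPIN_RIGHT:  left = 7; right = 0; break;
        }
    }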

Calculating the movement direction
Figure 6 Calculation of directions

α: The catcher’s direction (in degrees) is determined by the vector from the red dot attached to the catcher’s rear side to its center of mass.

β: The target’s direction is determined by the vector from the catcher’s current position to the target position.

γ: The movement direction is the difference between the two angles, γ = β − α.

γ varies between −180 and 180 degrees:
if 45 < |γ| < 180, spin right/left
if 10 < |γ| < 45, curve right/left
if |γ| < 10, move forward
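A C++ sketch of this decision, using the same movement modes as in the previous sketch; whether a positive γ means “right” depends on the image coordinate convention, so the left/right mapping here is an assumption.

    #include <cmath>

    // Same movement modes as in the previous sketch.
    enum Move { FORWARD, CURVE_LEFT, CURVE_RIGHT, SPIN_LEFT, SPIN_RIGHT };

    const double PI = 3.14159265358979323846;

    // Angle (in degrees) of the vector from (fromX, fromY) to (toX, toY).
    double angleDeg(double fromX, double fromY, double toX, double toY)
    {
        return std::atan2(toY - fromY, toX - fromX) * 180.0 / PI;
    }

    Move chooseMove(double comX, double comY,      // catcher's center of mass
                    double dotX, double dotY,      // red dot on the rear
                    double targetX, double targetY)
    {
        double alpha = angleDeg(dotX, dotY, comX, comY);        // heading
        double beta  = angleDeg(comX, comY, targetX, targetY);  // to target
        double gamma = beta - alpha;
        while (gamma > 180.0)   gamma -= 360.0;    // normalize to (-180, 180]
        while (gamma <= -180.0) gamma += 360.0;

        if (std::fabs(gamma) < 10.0) return FORWARD;
        if (std::fabs(gamma) < 45.0) return gamma > 0.0 ? CURVE_RIGHT : CURVE_LEFT;
        return gamma > 0.0 ? SPIN_RIGHT : SPIN_LEFT;
    }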

Catch
When the ‘catching’ vehicle reaches the target, the two objects unite into one big object. When this happens the vehicles stop and a yellow/green “CATCH” message appears.
Figure 7 Snapshot of a catch
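A simplified catch test, assuming a catch is declared when the two centers of mass come closer than a merge threshold; in the actual program the catch is detected when the two blobs unite into a single object, as described above.

    #include <cmath>

    // Hypothetical catch test: true when the two vehicles' centers of mass
    // are closer than mergeDist (in pixels).
    bool isCatch(double x1, double y1, double x2, double y2, double mergeDist)
    {
        double dx = x2 - x1, dy = y2 - y1;
        return std::sqrt(dx * dx + dy * dy) < mergeDist;
    }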

Results
One of the most important aspects of this project is the sampling rate, so a lot of effort went into making the calculation and transmission time as short as possible. The sampling time we achieved was around half a second (2 Hz). This rate allowed us to achieve satisfactory results, as demonstrated in the attached demos.

Acknowledgment
We are grateful to our project supervisor Eli Appelboim, the lab engineer Johanan Erez, and the lab assistant Dror Ozana. We would also like to express our gratitude to the Ollendorff Minerva Center Fund for supporting this project.