Autonomous Vehicle Based on LEGO Robot, iPAQ Pocket PC and Miniature Camera

Abstract
The aim of this project is to build a robot that will use an image processing algorithm for navigation purposes. This robot will use a digital camera to “see” the surface and a Pocket PC (iPAQ) as an information-processing unit to decide where to go.


Equipment & Working environment

  • PAQ-Mobile: a robot based on a ‘LEGO Mindstorms’ vehicle, controlled by an RCX brick and an IR transmitter
  • Pocket PC COMPAQ iPAQ H3900
  • LifeView FlyJacket i3800 expansion sleeve
  • Digital camera Watec WAT-270A
  • Software development environment: Microsoft eMbedded Visual C++

Figure 1 PAQ-Mobile

Figure 2 iPAQ Pocket PC

Figure 3 FlyJacket i3800

The PAQ-Mobile is a robot based on LEGO Mindstorms. It consists of 4 DC motors, an RCX motor control unit, a battery, a digital camera and an iPAQ Pocket PC. In this way the PAQ-Mobile operates in a closed control loop and is completely independent while performing any task it is programmed to do. The Pocket PC is the ‘brain’ of the robot. The image is sampled by the digital camera, which is mounted at the front of the robot. After sampling, the image is transferred into the iPAQ’s memory through the FlyJacket expansion sleeve. The Pocket PC runs the image processing algorithm and also displays the image on its screen. Once a decision is made, a command is sent from the Pocket PC to the ‘tower’ IR transmitter over a serial RS-232 cable, and from the tower it is relayed to the RCX unit, which controls the robot’s motors.
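As a rough illustration of this closed loop, the following C++ sketch outlines the robot’s main cycle. All names here are hypothetical stand-ins; in the real system the frame arrives through the FlyJacket expansion sleeve and the command leaves through the serial port.

  // Minimal sketch of the closed control loop (hypothetical names;
  // stubs stand in for the frame-capture and serial-port code).
  struct Frame { unsigned char rgb[120 * 160 * 3]; };  // 120x160 RGB image
  enum Command { GO_STRAIGHT, TURN_LEFT, TURN_RIGHT, U_TURN, STOP };

  static Frame   CaptureFrame()             { return Frame(); } // stub
  static Command ProcessImage(const Frame&) { return STOP; }    // stub
  static void    SendToRCX(Command)         {}                  // stub

  int main() {
      for (;;) {
          Frame frame = CaptureFrame();       // 1. sample the camera
          Command cmd = ProcessImage(frame);  // 2. decide where to go
          SendToRCX(cmd);                     // 3. drive the motors via IR
          if (cmd == STOP) break;             // 'end' sign recognized
      }
      return 0;
  }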

About the algorithm

In this project we used an image processing algorithm for navigation. We programmed the robot to perform a specific navigation task: finding the exit from a labyrinth. The robot has to recognize the path and follow it to the end. It has to turn left at every junction it encounters in order to find a solution to the labyrinth (this will not necessarily be the shortest way out). The robot must also recognize 2 kinds of signs and distinguish between them:

  1. The ‘u-turn’ sign, which tells the robot to perform a u-turn
  2. The ‘end’ sign, which is placed at the end of the labyrinth

Figure 4 ‘u-turn’ sign

Figure 5 ‘end’ sign

Figure 6 The labyrinth

The algorithm can be divided into the following stages:
1. Sampling: The image is sampled by the digital camera and transferred to the iPAQ in RGB format at 120×160 resolution.
2. HPF unit: The sampled image is filtered by a high-pass filter (HPF) in order to detect sharp contours in the picture.
3. Quantizer: Binary quantization – 1 means a sharp edge, 0 means no edge.
4. Information processing & logic: The information gathered from the picture is analyzed in order to decide where to go. It is extremely important to distinguish between the real path and distortions in the picture. We assume that the picture can contain various distortions and noise, caused for example by a non-homogeneous floor color, dirt on the floor, noise in the CCD, etc. The main difference between the noise and the real path is that the noise is not continuous, so in order to eliminate these undesirable effects we ignore all discontinuous edges in the picture.
For each frame, the deviation from the center of the path is computed. The deviation is a value in [-1, 1]: negative for a left deviation and positive for a right one. The decision where to go is made according to this value (see the sketch after the list below). Special care is taken in the following cases:

  • A junction is encountered -> take the left path
  • No path is found at all -> the robot is lost
  • One of the signs is recognized -> obey the sign
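
A rough C++ sketch of stages 2–4 follows. It processes a single grayscale row; the threshold values, names and the deviation-to-command mapping are assumptions for illustration, not the project’s actual code.

  // Sketch of stages 2-4 on one grayscale image row (illustrative only;
  // thresholds and names are assumed, not taken from the project code).
  const int W = 160;              // image width (frames are 120x160)
  const int EDGE_THRESHOLD = 40;  // assumed HPF threshold

  enum Command { GO_STRAIGHT, TURN_LEFT, TURN_RIGHT, LOST };

  // Returns the deviation in [-1, 1] for one row: negative when the
  // path center lies to the left of the image center.
  double RowDeviation(const unsigned char gray[W], bool* pathFound) {
      int left = -1, right = -1;
      for (int x = 1; x < W; ++x) {
          int d = (int)gray[x] - (int)gray[x - 1];  // HPF: horizontal gradient
          int hp = (d < 0) ? -d : d;
          int edge = (hp > EDGE_THRESHOLD) ? 1 : 0; // binary quantizer
          if (edge) {
              if (left < 0) left = x;   // first edge = left path border
              right = x;                // last edge  = right path border
          }
      }
      *pathFound = (left >= 0);
      if (!*pathFound) return 0.0;      // no path found in this row
      double center = 0.5 * (left + right);  // path center column
      double mid = (W - 1) / 2.0;
      return (center - mid) / mid;      // normalized to [-1, 1]
  }

  // Maps the deviation to a steering command (thresholds assumed).
  Command Decide(double deviation, bool pathFound) {
      if (!pathFound) return LOST;      // no path found at all
      if (deviation < -0.2) return TURN_LEFT;
      if (deviation > 0.2) return TURN_RIGHT;
      return GO_STRAIGHT;
  }

In the real system the decision also incorporates the junction and sign-recognition cases listed above.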

The following figures show the output screen of the iPAQ while the robot is working. In each figure the picture on the left is the real image (as captured by the camera), and the picture on the right is the processed edge picture.

Figure 7 iPAQ display (A)

Figure 8 iPAQ display (B)

The value printed in red in the upper right corner of the display is the ‘deviation’ (as discussed previously). In both cases the deviation is relatively large and negative, which means the robot is about to turn left. In case (A) the deviation is negative because the robot is not aligned with the path, and in case (B) the robot is at a junction.
As you can see, the edges in the processed picture are classified into several categories, each represented by a different color (a schematic classification is sketched after the list):

  • black – represents the chosen path which the robot follows.
  • red – represents path segments located to the right of the chosen path. The robot can ‘see’ these segments but does not take them into account (as already mentioned, the robot always prefers the left path whenever possible).
  • blue – represents path segments which are too far away. The robot can ‘see’ these segments but does not take them into account yet; however, they will become important as the robot gets closer.
  • green – represents noise and distortions in the picture.
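
A schematic version of this classification might look as follows; the structure, field names and criteria are hypothetical and only restate the four categories above in code form.

  // Hypothetical classification of a detected edge segment into the
  // four display categories described above (illustrative only).
  enum SegmentClass {
      CHOSEN_PATH,  // black: the path the robot follows
      RIGHT_PATH,   // red:   segments to the right of the chosen path
      FAR_PATH,     // blue:  segments too far away to matter yet
      NOISE         // green: discontinuous edges treated as distortion
  };

  struct Segment {
      int x, y;         // position in the 120x160 image
      bool continuous;  // discontinuous edges are treated as noise
  };

  SegmentClass Classify(const Segment& s, int chosenPathX, int nearRowY) {
      if (!s.continuous)     return NOISE;       // noise is not continuous
      if (s.y < nearRowY)    return FAR_PATH;    // still too far away
      if (s.x > chosenPathX) return RIGHT_PATH;  // right of the chosen path
      return CHOSEN_PATH;                        // the path being followed
  }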

Summary

One of the main difficulties in this project was to build and program the robot so that it works in real time. From the standpoint of execution time, the bottleneck of the system is the IR communication between the RCX and the tower: it takes several tenths of a second to transmit a message to the RCX and to get back a confirming answer. As a result, the program spends most of its time in busy waiting. To solve this problem, a better motor control unit would be needed.
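
To make the bottleneck concrete, the fragment below illustrates the blocking send-and-confirm pattern described above; the serial helpers are stubs, not the actual tower protocol.

  // Illustration only: the blocking send-and-confirm pattern that makes
  // IR messaging the bottleneck. The serial helpers are stubs.
  static bool SerialSend(unsigned char /*msg*/) { return true; }  // stub
  static bool SerialAckReceived()               { return true; }  // stub

  void SendCommandBlocking(unsigned char msg) {
      SerialSend(msg);                 // write the command to the IR tower
      while (!SerialAckReceived()) {
          // Busy waiting: the CPU spins here for tenths of a second per
          // command instead of processing the next frame. Asynchronous
          // I/O or a smarter motor controller would reclaim this time.
      }
  }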