Vision-Based Navigation of an Autonomous Vehicle

The goal of this project is to build a robot that will use an image processing algorithm for navigation purposes.

Abstract

The goal of this project is to build a robot that will use an image processing algorithm for navigation purposes. This robot will use a digital video camera to “see” the surface and a pocket PC (iPAQ) as an information processing unit to decide where to go.
Our design is inspired by the work of Nissen et al. (2002) on the iBOT.
The project includes image processing on the pocket PC and communication between the iPAQ and the Lego RCX controller.
The communication between the iPAQ and the RCX is based on the infrared (IR) interface built into the RCX controller and the IR port of the iPAQ.
The image processing algorithm is designed to interpret an image of a corridor and decide on the direction of the next move in order to stay in the middle of the corridor. To identify the corridor's direction we make use of edge-detecting filters and primitive template matching.
We also use a logarithmic transform of the image (a "logarithm picture") to decrease the impact of light reflected from shiny surfaces.
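As an illustrative sketch of this preprocessing step (our own C++ code, not the project's implementation; the name logTransform is hypothetical), assuming an 8-bit grayscale image stored row-major:

    #include <cmath>
    #include <cstddef>
    #include <cstdint>
    #include <vector>

    // Sketch of the "logarithm picture" step: the log transform compresses
    // bright values more than dark ones, so specular highlights from shiny
    // floors dominate the image less after the transform.
    std::vector<uint8_t> logTransform(const std::vector<uint8_t>& gray)
    {
        // Scale so that the maximum input (255) still maps to 255:
        // s = 255 / log(1 + 255)
        const double s = 255.0 / std::log(256.0);
        std::vector<uint8_t> out(gray.size());
        for (std::size_t i = 0; i < gray.size(); ++i)
            out[i] = static_cast<uint8_t>(s * std::log(1.0 + gray[i]));
        return out;
    }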
Background

Steffen Nissen, Steffen Larsen and Sidsel Jensen (2002) presented a Linux-based image-processing autonomous vehicle in Real time image processing on an iPAQ based robot.
We were inspired by their design to build a similar platform and perform more advanced image processing.
Instead of the Linux platform we chose WinCE as the operating system for the iPAQ. This choice enabled us to use Microsoft eMbedded Visual C++, a very convenient environment for C++ programming and debugging.

Basic approach
Our work can be divided into two main issues:
The first obstacle to overcome was closing the control loop between the iPAQ and the RCX. We wanted to take advantage of the IR port of the iPAQ and transmit from it to the IR receiver of the RCX; a sketch of the required packet framing is given below.
The second issue we wanted to address was devising and testing a new algorithm for the vehicle's navigation. We did not want the vehicle to trace a line or other mark on the floor; instead, we wanted it to be able to navigate autonomously in a corridor.
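Although our iPAQ-side transmitter never produced an uninterrupted packet (see Conclusions), the packet format itself is publicly documented in Kekoa Proudfoot's RCX internals notes. The following is a minimal C++ sketch of the framing, assuming those notes are accurate (framePacket is our own hypothetical helper):

    #include <cstdint>
    #include <vector>

    // Frame an RCX command for IR transmission (per the publicly
    // documented protocol): header 0x55 0xFF 0x00, then every payload
    // byte followed by its bitwise complement, and finally a checksum
    // (sum of payload bytes mod 256) with its complement. The framed
    // bytes go out at 2400 baud, 8 data bits, odd parity, 1 stop bit.
    std::vector<uint8_t> framePacket(const std::vector<uint8_t>& payload)
    {
        std::vector<uint8_t> packet = { 0x55, 0xFF, 0x00 };
        uint8_t checksum = 0;
        for (uint8_t b : payload) {
            packet.push_back(b);
            packet.push_back(static_cast<uint8_t>(~b));
            checksum += b;
        }
        packet.push_back(checksum);
        packet.push_back(static_cast<uint8_t>(~checksum));
        return packet;
    }

For example, the one-byte "alive" opcode 0x10 frames to the sequence 55 FF 00 10 EF 10 EF. A pause in the middle of such a packet, as we experienced due to process scheduling, renders it invalid to the RCX.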
Tools
To compile and debug the C++ code on the iPAQ we used Microsoft eMbedded Visual C++.
We found it very difficult to debug the IR signals, so we borrowed an infrared sensor and an oscilloscope from the nonlinear optics lab.

Figure 1 – IR sensor and scope
We implemented our algorithm for image processing in Matlab. A digital video camera was used to produce video clips of various corridors, on which we tested our algorithm.
Conclusions
Due to a process scheduling problem we could not make the iPAQ transmit a valid packet without a break in the middle. This problem prevented us from closing the control loop and testing our algorithm on a functioning autonomous vehicle.
To avoid this obstacle, one should either work in a Linux environment or use a serial port driving an external IR transmitter (see Autonomic vehicle based on LEGO robot, iPAQ Pocket PC and miniature camera).

The end of the hall was easy to spot when taking into account the properties of a perspective view. When looking towards the end of a hall, one may notice the convergence of lines toward the end. This convergence creates high spatial frequencies at the end of the hall. The algorithm uses primitive template matching to locate the area that contains these high frequencies.
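A minimal sketch of this idea in C++ (our own illustration; the project's implementation was in Matlab, and findCorridorEnd is a hypothetical name). Per-column edge activity serves as a crude measure of high spatial frequency, and the column band with the most activity marks the end of the hall:

    #include <cstdint>
    #include <cstdlib>
    #include <vector>

    // Find the horizontal position of the corridor's end: sum a crude
    // horizontal gradient down each column of a w-by-h grayscale image,
    // then slide a window of `band` columns and return the center of
    // the window with the highest total edge activity.
    int findCorridorEnd(const std::vector<uint8_t>& gray, int w, int h, int band)
    {
        std::vector<long> colEdge(w, 0);
        for (int y = 0; y < h; ++y)
            for (int x = 0; x + 1 < w; ++x)
                colEdge[x] += std::abs(int(gray[y * w + x + 1]) - int(gray[y * w + x]));

        long best = -1, sum = 0;
        int bestX = 0;
        for (int x = 0; x < w; ++x) {
            sum += colEdge[x];
            if (x >= band) sum -= colEdge[x - band];
            if (x >= band - 1 && sum > best) { best = sum; bestX = x - band / 2; }
        }
        return bestX;  // the vehicle would steer to keep this near w / 2
    }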

Acknowledgment
We are grateful to our project supervisor Johanan Erez for his help and guidance throughout this work.
We are also grateful to the nonlinear optics lab for providing the IR sensor and oscilloscope.
Many thanks go to the Ollendorff Minerva Center which supported this project.