Robocar – An Autonomous Robotic Vehicle

Abstract

Robocar is essentially a robot that uses an iPAQ as its brain, a small camera as its eyes and a LEGO Mindstorms vehicle to move around. This project is somewhat exceptional in its goals and in the time invested in it. The project aims at three targets:

  • Implementing an algorithm for an autonomous robot that can avoid obstacles
  • Providing a general, ready-made software foundation and a feature-rich development / testing / deployment environment for future robotic-vehicle projects
  • Summarizing and categorizing the different robotics projects done at the Technion

Background
A lot of effort has been put into autonomous robots in numerous laboratories at the Technion, with little or no cooperation between the teams and no common goal. This project was equipped with a very capable platform for implementing a robotic vehicle; the task definition was not dictated, so the team was free to experiment with the platform and to pursue both concrete results and larger-scope goals that may be of interest to future teams in other laboratories.

After examining what had been done in the past, the team chose the task of navigating the vehicle out of a corridor containing obstacles.

Thanks to a relatively long time frame and the guidance of the project instructors, two different algorithms were explored and developed, each for about a year. The first algorithm proved to be very complex and technically difficult to implement; the second was conceived as a result of that experience and matured into a working solution.

 

Algorithm #1 – the Homography approach

A homography is a projective coordinate transformation (linear in homogeneous coordinates) that was used here to create a top view of the area from the image captured by the robot:
Figure 1 – Left: original image. Right: top view after the homography transformation
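
As a concrete illustration, here is a minimal sketch of such a warp using OpenCV (an assumption made for illustration; the packages actually used are listed in the project book). It maps four placeholder floor points from the camera image to a rectangle in the top view:

    #include <vector>
    #include <opencv2/imgproc.hpp>

    // Warp a camera frame into a top view of the floor plane.
    // The four source points are where a known floor rectangle appears
    // in the camera image; all coordinates here are placeholder values.
    cv::Mat toTopView(const cv::Mat& frame)
    {
        std::vector<cv::Point2f> src = {
            {110, 300}, {530, 300},   // far corners of the floor rectangle
            {620, 470}, {20, 470}     // near corners, as seen by the camera
        };
        std::vector<cv::Point2f> dst = {
            {200, 100}, {440, 100},   // the same corners, drawn as a true
            {440, 460}, {200, 460}    // rectangle in the top view
        };

        // Four point correspondences fully determine the homography.
        cv::Mat H = cv::getPerspectiveTransform(src, dst);

        cv::Mat topView;
        cv::warpPerspective(frame, topView, H, cv::Size(640, 480));
        return topView;  // floor geometry is preserved; upright objects smear
    }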

The top view is much easier for navigation-extraction algorithms to work with. In addition, the transformation noticeably distorts objects that do not lie flat on the floor, as can be observed in Figure 1. The process illustrated below can be used to tell the robot where it should go next.

Figure 2 – Block diagram of the process needed to reach a navigation decision from the captured image
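
The diagram itself is not reproduced here, but the final stage of such a process can be sketched as follows: given a binary top-view mask of free floor, drive toward the band containing the most free space. This is a minimal, hypothetical sketch; the three-band split and the Command type are illustrative, not the project's actual interface.

    #include <opencv2/core.hpp>

    enum class Command { Left, Forward, Right };  // illustrative command set

    // Decide where to drive from a binary top-view mask
    // (255 = free floor, 0 = obstacle or unknown).
    Command decide(const cv::Mat& freeMask)
    {
        int third = freeMask.cols / 3;
        cv::Rect leftBand (0,         0, third, freeMask.rows);
        cv::Rect midBand  (third,     0, third, freeMask.rows);
        cv::Rect rightBand(2 * third, 0, third, freeMask.rows);

        int left   = cv::countNonZero(freeMask(leftBand));
        int center = cv::countNonZero(freeMask(midBand));
        int right  = cv::countNonZero(freeMask(rightBand));

        // Prefer driving straight when the center band is at least as clear.
        if (center >= left && center >= right) return Command::Forward;
        return left > right ? Command::Left : Command::Right;
    }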

 

Algorithm #2 – the Color Classifying approach

This approach assumes that the floor the vehicle is driving on has a given texture. It is designed to be much simpler than the first algorithm and thus more feasible and robust. Texture may be defined in more than one way; the simplest definition is the set of colors that appear in a region.
That “texture definition” needed to be easy to calibrate. A histogram is built dynamically during configuration and then used in a classification algorithm, applied to a “top view” of the floor derived from the same homography as in the first algorithm. The result is a driving command, based on what is classified as floor texture, what is classified as an obstacle, and on previous movements.

Figure 3 – Block diagram of the Color-Classifying algorithm
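
A minimal sketch of this classification stage, again assuming OpenCV: during calibration, a hue-saturation histogram is accumulated from patches known to contain only floor; at run time, the top view is back-projected through that histogram and thresholded, so colors never seen during calibration become obstacle candidates. The histogram sizes and the threshold are made-up values.

    #include <vector>
    #include <opencv2/imgproc.hpp>

    // Calibration: accumulate a hue-saturation histogram from image
    // patches known to contain only floor.
    cv::Mat learnFloorHistogram(const std::vector<cv::Mat>& floorPatchesBGR)
    {
        int histSize[] = {30, 32};                       // hue, saturation bins
        float hRange[] = {0, 180}, sRange[] = {0, 256};
        const float* ranges[] = {hRange, sRange};
        int channels[] = {0, 1};

        cv::Mat hist = cv::Mat::zeros(2, histSize, CV_32F);
        for (const cv::Mat& patch : floorPatchesBGR) {
            cv::Mat hsv;
            cv::cvtColor(patch, hsv, cv::COLOR_BGR2HSV);
            cv::calcHist(&hsv, 1, channels, cv::Mat(), hist, 2, histSize,
                         ranges, true /*uniform*/, true /*accumulate*/);
        }
        cv::normalize(hist, hist, 0, 255, cv::NORM_MINMAX);
        return hist;
    }

    // Classification: mark top-view pixels whose colors match the
    // learned floor histogram; everything else is an obstacle candidate.
    cv::Mat classifyFloor(const cv::Mat& topViewBGR, const cv::Mat& floorHist)
    {
        float hRange[] = {0, 180}, sRange[] = {0, 256};
        const float* ranges[] = {hRange, sRange};
        int channels[] = {0, 1};

        cv::Mat hsv, backProj;
        cv::cvtColor(topViewBGR, hsv, cv::COLOR_BGR2HSV);
        cv::calcBackProject(&hsv, 1, channels, floorHist, backProj, ranges);

        cv::Mat freeMask;   // 255 = looks like floor, 0 = obstacle candidate
        cv::threshold(backProj, freeMask, 16, 255, cv::THRESH_BINARY);
        return freeMask;
    }

The resulting mask could then feed a decision stage such as the decide() sketch shown earlier.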

The Software Foundation

The software foundation was designed from the beginning to be as general and useful as possible, in an effort to make continuing this project as convenient as possible, allowing future teams to focus on their ideas rather than on technical issues.

The chief features of the provided software foundation are:

  • Good object-oriented design
  • A PC simulation system, built as one implementation of the foundation
  • A vehicle-controller application, built as another implementation
  • Intuitive menu system
  • A ready-made abstraction level on top of the vehicle driver layer
  • Configuration via file – no rebuild needed after re-configuration
  • Persistent and temporary message displays
  • Message logging
  • Efficient yet easily extensible code

Figure 4 – Block diagram of the software component abstraction, which is at the base of the dynamics in all software configurations: the vehicle code, the simulation code and the controller code
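
To give the flavor of this abstraction, here is a hypothetical sketch; the class and method names are invented for illustration, not the project's actual API, which is documented in the project book.

    #include <cstdio>

    // Hypothetical driver abstraction: algorithm code drives an
    // abstract vehicle interface rather than concrete hardware.
    class IVehicle {
    public:
        virtual ~IVehicle() {}
        virtual void drive(int leftPower, int rightPower) = 0;  // motor powers
        virtual void stop() = 0;
    };

    // Simulation build: prints (or draws) instead of moving motors.
    class SimulatedVehicle : public IVehicle {
    public:
        void drive(int l, int r) override { std::printf("sim: L=%d R=%d\n", l, r); }
        void stop() override              { std::printf("sim: stop\n"); }
    };

    // A deployment build would provide a LegoVehicle that forwards the
    // same calls to the LEGO controller API.

    // Because the algorithm sees only the interface, the same navigation
    // logic runs unchanged on the robot, in the simulator and under the
    // controller application.
    void demoManeuver(IVehicle& v)
    {
        v.drive(70, 70);  // forward
        v.drive(30, 70);  // veer left around an obstacle
        v.stop();
    }

    int main()
    {
        SimulatedVehicle sim;
        demoManeuver(sim);  // swap in a LegoVehicle for deployment
    }

This separation is what allows testing in the simulation environment before touching the real platform.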

Tools

The final algorithm implementations are in C++, with the release build compiled in Visual Studio Embedded and the simulation compiled in Visual Studio .NET 2003, whereas the feasibility tests are in Matlab. The software packages that were used are described in the project book.


Conclusions

This project was meant to demonstrate a capability of a robotic vehicle and also to last as a foundation for other projects; we feel both of these goals were achieved. The platform has been used to build a vehicle that can make enough correct decisions to cope with a less-than-perfect environment, and future continuation projects should need to spend less time re-inventing the dynamic model, wrestling with the LEGO controller API, rebuilding the project on every configuration change, debugging without logs, developing a GUI, or testing on the real platform without a simulation environment.

Some of the things future projects could do based on this work: get an introduction to robotics; test different algorithms on the prepared system; build a different vehicle and controller for the current software; develop the motion-vector depth-estimation algorithm proposed in the project book; develop the complete homography algorithm; use techniques such as homography and registration in other projects; and develop a method to overcome reflections.
In addition, we learned an important lesson: a relatively simple idea can still be challenging to implement, yet prove much more robust than a complicated one. Simplicity and robustness turned out to be crucial factors – at the end of the day, the simpler idea prevailed.
Acknowledgments

We would like to thank Johanan Erez and Dr. Ilan Shimshoni, our supervisors; the Electrical Engineering people with whom we consulted:
Koby Kohai, Yoram Yihyie, Eli Shoshan, Eli Appelboim, Dr. Alexander Bekker and Dr. Yoav Y. Schechner; the VISL lab staff: Ina Krinski, Aharon Yacobi and Johanan Erez; and finally, robot enthusiasts worldwide.

We are also grateful to the Ollendorf Minerva Center Fund for supporting this project.