Autonomous Vehicle Navigation through a Corridor Using Computer Vision

Abstract

Robot navigation through a corridor using computer vision is a well-known and important problem. Given an ultimate solution to this problem, building indoor robots will be easier. In our project we explored various techniques and approaches to this problem. We designed theoretical algorithms that work in many hard cases. When approaching this problem it is important to note that corridors differ greatly from one another in many aspects, so building an ultimate algorithm is very hard. We tried our algorithms on a very hard case: the 6th floor corridor (near the VISL Lab).

We tried two approaches in our project: following the line between floor plates that runs along the corridor, and following the far wall in the corridor view. Detecting the line on a given floor is very hard, so all the algorithms we designed use the second approach.

Background

In the winter semester of 2004, two projects were completed in the lab that tried to build a robot that navigates using vision. In our project we used the robot built by Alex Yufit & Maksim Shmukler in their project. Our mission was to design a more general and autonomous algorithm for robot navigation through a corridor.

Basic approaches
We designed three theoretical algorithms for navigation through a corridor using the corridor view. The main task of any algorithm that uses the corridor view is to find the center of the far wall. Given the center of the far wall in each frame, the robot should correct its trajectory so that the center of the far wall stays as close as possible to the center of the image: if the center of the far wall is far enough from the image center and lies to its left, the robot should turn left; if it is far enough from the image center and lies to its right, the robot should turn right. A minimal sketch of this steering rule appears below.
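The Python sketch below illustrates this steering rule. The function and parameter names (steering_command, dead_zone) are illustrative choices made for this sketch, not part of the robot's actual software:

    # Minimal sketch of the steering rule described above; assumes the
    # far-wall center was already found by one of the detection techniques.
    def steering_command(wall_center_x, image_width, dead_zone=20):
        """Decide a turn from the horizontal offset of the far-wall center.

        wall_center_x -- x-coordinate (pixels) of the detected far-wall center
        image_width   -- width of the captured frame in pixels
        dead_zone     -- half-width (pixels) of the band around the image
                         center in which no correction is issued
        """
        offset = wall_center_x - image_width / 2.0
        if offset < -dead_zone:
            return "turn_left"     # far wall lies left of the image center
        if offset > dead_zone:
            return "turn_right"    # far wall lies right of the image center
        return "go_straight"       # close enough to the image center

The dead zone keeps the robot from oscillating around the corridor center when the detected far-wall center is already close to the middle of the image.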

We used different techniques to find the center of the far wall: searching for areas with the highest texture frequency; an optic-flow-based technique; and searching for lines parallel to the corridor direction, whose virtual intersection should lie at the center of the far wall. A sketch of the last technique follows.
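As an illustration of the line-based technique, the following sketch estimates the far-wall center as the robust intersection of oblique line segments (the corridor's vanishing point). It uses OpenCV and NumPy rather than the Matlab code we actually wrote, and all thresholds are illustrative assumptions:

    import cv2
    import numpy as np

    def far_wall_center(frame_bgr):
        """Estimate the far-wall center as the median intersection of
        oblique line segments running along the corridor."""
        gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
        edges = cv2.Canny(gray, 50, 150)
        segs = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180,
                               threshold=80, minLineLength=60, maxLineGap=10)
        if segs is None:
            return None
        lines = []
        for x1, y1, x2, y2 in segs[:, 0]:
            # Corridor lines appear oblique in the image, so skip
            # near-horizontal and near-vertical segments.
            angle = abs(np.degrees(np.arctan2(y2 - y1, x2 - x1)))
            if angle > 90:
                angle = 180 - angle           # fold into [0, 90]
            if angle < 15 or angle > 75:
                continue
            # Homogeneous line through the two endpoints.
            lines.append(np.cross([x1, y1, 1.0], [x2, y2, 1.0]))
        points = []
        for i in range(len(lines)):
            for j in range(i + 1, len(lines)):
                p = np.cross(lines[i], lines[j])  # homogeneous intersection
                if abs(p[2]) > 1e-6:              # skip parallel pairs
                    points.append(p[:2] / p[2])
        if not points:
            return None
        # The median of pairwise intersections is a robust estimate of
        # the vanishing point, i.e. the center of the far wall.
        return tuple(np.median(np.array(points), axis=0))

Taking the median rather than the mean makes the estimate tolerant to stray segments (door frames, reflections) that do not run along the corridor.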

Figure 1: Corridor view; center of the far wall as found by our algorithm
Figure 2: Floor view
Figure 3: Lines going along the corridor; our algorithm output
Figure 4: Areas with highest texture frequency; our algorithm output

Tools

  • We used Matlab to test our theoretical algorithms
  • We used Microsoft Embedded Studio to build the basic program for image capturing and robot control
  • We used an iPAQ and a FlyJacket to control the robot and capture images from the camera

Conclusions

In our project we designed theoretical algorithms for robot navigation through a corridor using vision. However, there is a long way from those algorithms to a real robot that runs them in real time and manages to navigate through any given corridor. The theoretical algorithms handle only the image processing, while a real robot involves other aspects such as communication and control.

Combining image processing with online control can lead to many hard cases that are not handled well enough by the theoretical algorithms. In addition, a failure of the image processing algorithm, even in rare cases, can lead to bad control.

Our conclusion is that the control should not follow straightforwardly from the output of the image processing algorithm, but should take into account that the algorithm might fail in some cases. Another conclusion is that information about control decisions and the image processing output in previous frames could be useful to the image processing algorithm (enhancing it by predicting the probable output), as sketched below.
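A minimal sketch of such an enhancement, assuming the per-frame detector returns the x-coordinate of the far-wall center (or None on failure); the window size and jump threshold are illustrative assumptions:

    from collections import deque

    class CenterTracker:
        """Smooth the far-wall center over recent frames and fall back to
        the prediction when a single frame's output looks like a failure."""

        def __init__(self, window=10, max_jump=80.0):
            self.history = deque(maxlen=window)  # recent accepted x positions
            self.max_jump = max_jump             # pixels; larger jumps are
                                                 # treated as detector failures

        def update(self, detected_x):
            predicted = (sum(self.history) / len(self.history)
                         if self.history else None)
            if detected_x is None or (
                    predicted is not None
                    and abs(detected_x - predicted) > self.max_jump):
                # Distrust this frame; steer by the prediction instead.
                return predicted
            self.history.append(detected_x)
            return sum(self.history) / len(self.history)

With such a tracker in front of the control loop, a rare bad frame no longer translates directly into a bad steering command.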

Acknowledgment

We are grateful to our project supervisor Johanan Erez for his help and guidance throughout this work.
We are also grateful to the Ollendorf Research Center Fund for supporting this project.