Driving Assistance System: Speed Estimation by Video Camera Installed in a Car


Abstract
The VISL lab at the Technion conducts research on driving assistance aids. As infrastructure for such future aids, we tried to estimate the car's velocity from a video movie taken through the vehicle's front window, without making any assumptions about the vehicle's surroundings. The algorithm we used is based on the calculation and analysis of motion vectors, following the article by H. G. Nguyen, "Obtaining Range from Visual Motion Using Local Image Derivatives".

The problem
The main problem is that the movie is two-dimensional, while we are interested in the third dimension (velocity in the driving direction, "into" the scene). Three items form a triplet: the camera's motion within its surroundings, the structure of the surroundings themselves, and the optical flow in the image. Knowing any two of the three allows one to calculate the third; however, no single item of the triplet can be deduced from one other item alone.
Several methods have been examined in the laboratory, most of them relying on certain assumptions about the car's surroundings, for example knowing the dimensions of the dashed lane markings on the road. We tried a motion-vector analysis approach, based on the derivatives of the brightness along the x and y axes and in time (between frames).

The solution
By expanding the brightness function of the picture, E(x, y, t), into a Taylor series, and substituting the relations between an object's location relative to the camera and the object's image on the CCD, we can calculate the motion vector of each pixel in every frame.
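The per-pixel motion vectors can be sketched as follows. The first-order Taylor expansion of E(x, y, t) under brightness constancy gives Ex·u + Ey·v + Et = 0; a single pixel under-determines (u, v), so this sketch solves a least-squares system over a small window around each pixel (a Lucas-Kanade-style solver). This is a minimal NumPy illustration, not the project's MATLAB code; the window size and the conditioning threshold are assumptions.

```python
import numpy as np

def flow_from_derivatives(prev, curr, win=2):
    """Estimate per-pixel motion vectors from local brightness derivatives.

    Brightness constancy gives Ex*u + Ey*v + Et = 0 per pixel; (u, v) is
    recovered by least squares over a (2*win+1)^2 window.
    """
    prev = prev.astype(np.float64)
    curr = curr.astype(np.float64)
    # Spatial and temporal brightness derivatives.
    Ex = np.gradient(curr, axis=1)
    Ey = np.gradient(curr, axis=0)
    Et = curr - prev
    h, w = curr.shape
    u = np.zeros((h, w))
    v = np.zeros((h, w))
    for y in range(win, h - win):
        for x in range(win, w - win):
            ex = Ex[y - win:y + win + 1, x - win:x + win + 1].ravel()
            ey = Ey[y - win:y + win + 1, x - win:x + win + 1].ravel()
            et = Et[y - win:y + win + 1, x - win:x + win + 1].ravel()
            A = np.stack([ex, ey], axis=1)
            ATA = A.T @ A
            if np.linalg.cond(ATA) < 1e6:      # skip poorly conditioned windows
                u[y, x], v[y, x] = np.linalg.solve(ATA, -A.T @ et)
    return u, v
```

On a smooth pattern shifted by one pixel in x, the recovered field is close to (1, 0) in the interior, up to first-order Taylor error.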
After calculating the motion vectors for all the pixels, the set was cleaned of vectors originating in numerical errors. Since we assume that a pixel's motion vector is created by the movement of an object larger than a single pixel, there should be several neighboring pixels with similar motion vectors. Discarding vectors in zones of high variance (in orientation and magnitude) removes most of the vectors that are due to noise.
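The variance-based cleaning step can be sketched like this: for each pixel, the local variance of the vector magnitudes and a circular variance of the orientations are computed over a small window, and vectors in high-variance zones are discarded. The window size and thresholds here are illustrative assumptions, not the project's values.

```python
import numpy as np

def filter_noisy_vectors(u, v, win=2, mag_var_thresh=0.5, ang_var_thresh=0.5):
    """Zero out motion vectors in zones of high local variance.

    Genuine motion vectors (from objects larger than one pixel) should
    agree with their neighbours; isolated, erratic vectors are noise.
    """
    mag = np.hypot(u, v)
    ang = np.arctan2(v, u)
    h, w = u.shape
    keep = np.zeros((h, w), dtype=bool)
    for y in range(win, h - win):
        for x in range(win, w - win):
            m = mag[y - win:y + win + 1, x - win:x + win + 1]
            a = ang[y - win:y + win + 1, x - win:x + win + 1]
            # Circular variance handles the angle wrap-around at +/-pi.
            ang_var = 1.0 - np.hypot(np.cos(a).mean(), np.sin(a).mean())
            if m.var() < mag_var_thresh and ang_var < ang_var_thresh:
                keep[y, x] = True
    return np.where(keep, u, 0.0), np.where(keep, v, 0.0)
```

A uniform flow field survives the filter, while a patch of random vectors embedded in it is zeroed out.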
Figure 1 – The motion vectors calculated in one frame (partial): after the numerical-error filtering (right) and after the spatial-variance filtering (left)

Given the motion vectors, it is possible to calculate the ratio between an object's velocity orthogonal to the camera plane and the projection of its distance from the camera onto the car's axis of progress (W/Z). The frame was then divided into about 300 blocks, and in each block the mean value of W/Z over the 20 frames was calculated.
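As a sketch of this stage: for pure forward translation, the flow at image point (x, y), measured from the focus of expansion, is u = x·W/Z and v = y·W/Z, so W/Z = (x·u + y·v) / (x² + y²) at each pixel; the values are then averaged per block across the frames. Taking the focus of expansion at the image centre and the block size below are assumptions of this illustration.

```python
import numpy as np

def w_over_z_blocks(u_seq, v_seq, block=16):
    """Per-block mean W/Z over a sequence of flow fields.

    For forward translation, flow is radial from the focus of expansion
    (assumed at the image centre): W/Z = (x*u + y*v) / (x^2 + y^2).
    """
    h, w = u_seq[0].shape
    yy, xx = np.mgrid[0:h, 0:w]
    xx = xx - w / 2.0
    yy = yy - h / 2.0
    r2 = xx**2 + yy**2
    r2[r2 == 0] = np.inf              # ignore the FOE pixel itself
    wz = np.mean([(xx * u + yy * v) / r2 for u, v in zip(u_seq, v_seq)],
                 axis=0)
    nby, nbx = h // block, w // block
    out = wz[:nby * block, :nbx * block].reshape(nby, block, nbx, block)
    return out.mean(axis=(1, 3))
```

Feeding in a synthetic radial flow field with a known W/Z recovers that value in every block.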
Taking a group of "training movies" with known car velocity, we were able to create, for each block, a function mapping the car's velocity to the mean W/Z value. Blocks in which this function was monotonically increasing were marked as "useful blocks". For each useful block, an inverse function from mean W/Z value to car velocity was calculated using the "nearest neighbor" method.
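The nearest-neighbor inversion for a single useful block can be sketched as follows; the training values below are made-up numbers for illustration only.

```python
import numpy as np

def invert_by_nearest_neighbour(train_wz, train_speeds, query_wz):
    """Nearest-neighbour inverse of one block's speed -> mean-W/Z function.

    train_wz: mean W/Z values observed in this block for training movies
    of known speed; query_wz: the value measured in a new movie.
    Returns the speed of the training sample whose W/Z is closest.
    """
    train_wz = np.asarray(train_wz, dtype=float)
    idx = np.argmin(np.abs(train_wz - query_wz))
    return train_speeds[idx]
```

Because the block was selected for monotonicity, the nearest training W/Z value corresponds to the nearest training speed.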
Figure 2 – The mean W/Z value as a function of car velocity for a specific block (right), and the blocks that were marked as "useful blocks" (left)

A movie taken at an unknown car velocity goes through all the mean-W/Z calculation stages. Using the mean W/Z values calculated in the "useful blocks" and the corresponding inverse functions, a list of estimated velocities is obtained; the mean of these estimates was taken as the car's velocity.
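The final aggregation step, combining the per-block estimates into a single speed, can be sketched as below; the block indices and inverse functions are placeholders for whatever the training stage produced.

```python
import numpy as np

def estimate_speed(block_wz, useful_blocks, inverse_funcs):
    """Combine per-block speed estimates into one value.

    block_wz: mean W/Z per block for the test movie; useful_blocks:
    indices of blocks whose training function was monotonic;
    inverse_funcs: one callable per useful block mapping W/Z -> speed.
    The final estimate is the mean of the per-block speeds.
    """
    estimates = [inverse_funcs[b](block_wz[b]) for b in useful_blocks]
    return float(np.mean(estimates))
```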
The entire process is described in the following scheme:
Figure 3 – Schematic description of the system

Tools
The movies were taken with a Sony DV camera and captured using Adobe Premiere. Deinterlacing was performed with the VirtualDub software by Avery Lee, using the Smart Bob filter by Donald Graft. The project was implemented in MathWorks MATLAB 6.5.

Conclusions
Due to the limited number of films, the car velocity in each film was estimated using the other films as training films. From such runs we obtained the following results:

Measured Speed (km/h)   Mean Estimated Speed (km/h)   Standard Deviation (km/h)
10                      16.178                        5.601
20                      22.065                        6.407
30                      30.178                        8.177
40                      37.700                        4.697
50                      40.234                        4.789

It can be seen that the estimated results correlate well with the measured velocity. Nevertheless, the limited number of films we used (60 films at 5 different speeds) and the limited variety of scenes may have made the results look better than they are.
To verify that the algorithm works, a more thorough test should be performed, with a stabilized camera, an accurate speed measurement, and a larger variety of movies.
Acknowledgment
We want to thank our project supervisor Ehud Orian for his help and guidance throughout the work.
We would also like to thank the whole lab team, especially Ina Krinsky and Yohanan Erez, for their support.
We are also grateful to the lab's staff and to the Ollendorff Minerva Center Fund, which supported this project.