Car Thefts Detection

In this project we develop an algorithm intended to detect car thefts. The system uses a stationary camera.

Abstract
In this project we develop an algorithm intended to detect car thefts. The system uses a stationary camera. The algorithm identifies a car alarm being switched on and gives notice of a possible theft attempt. The algorithm also recognizes the car’s polygon and marks it on the sampled images. The system is flexible and works with most car models and environments (daytime and nighttime) with a high probability of detection and, more importantly, a low false-alarm rate.

The problem
This project intends to answer the need for detecting car-theft attempts in real time. We focused mainly on cars parked in parking lots, and especially in underground parking lots. Standard car alarm systems usually sound some kind of siren and also turn on all of the car's signaling lights. The problem is that these signals are of little use in closed, underground lots. For that reason we developed an algorithm that detects the alarm and warns whoever watches the area of the parking lot monitored by the camera.

The solution
We assume that the lights stay off for 0.5 sec between flashes and that each flash lasts 0.125 sec; at 25 fps, one full flicker cycle therefore spans approximately 16 frames. The algorithm samples every 5 frames, and starting from the second iteration it also looks backward.
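The timing arithmetic above can be sketched as follows (the frame rate and flash timings are taken from the text; the sampling helper is illustrative):

```python
# Sampling arithmetic: 25 fps footage, lights off for 0.5 s between flashes,
# each flash lasting 0.125 s, and the detector inspecting every 5th frame.
FPS = 25
OFF_TIME_S = 0.5      # time the lights stay off between flashes
FLASH_DURATION_S = 0.125  # how long each flash stays lit

frames_per_cycle = FPS * (OFF_TIME_S + FLASH_DURATION_S)  # ~15.6, i.e. ~16 frames

def sampled_frames(n_frames, step=5):
    """Indices of the frames the detector actually inspects."""
    return list(range(0, n_frames, step))

print(round(frames_per_cycle))  # 16
print(sampled_frames(30))       # [0, 5, 10, 15, 20, 25]
```

Sampling every fifth frame guarantees several samples per flicker cycle, which is why looking one iteration backward is enough to confirm a flash.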

[Figure 1]

Signaling Detection
The Signaling Detection algorithm takes into account the objects' sizes, both relative to each other and in comparison to other objects. It also checks the objects' RGB values, the distance between them, and their position relative to previously recognized flashes. Below are two photos from the detection process.

[Figures 2 and 3: the flash-detection process]
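A minimal sketch of the detection criteria described above (not the authors' exact code): candidate bright blobs are filtered by size, color, and mutual distance. All threshold values here are hypothetical placeholders.

```python
# Illustrative filtering of candidate flash blobs by size, color and distance.
from dataclasses import dataclass
import math

@dataclass
class Blob:
    x: float
    y: float
    area: float          # blob size in pixels
    rgb: tuple           # mean (R, G, B) of the blob

def is_signal_colour(rgb, min_r=180, max_gb=160):
    """Signal lights are orange-ish: strong red, weaker green/blue (assumed thresholds)."""
    r, g, b = rgb
    return r >= min_r and g <= max_gb and b <= max_gb

def plausible_pair(a, b, min_dist=40, max_dist=400, max_area_ratio=3.0):
    """Two blobs can be the left/right lights of one car if they are similarly
    sized and roughly a car-width apart (assumed pixel thresholds)."""
    d = math.hypot(a.x - b.x, a.y - b.y)
    ratio = max(a.area, b.area) / max(min(a.area, b.area), 1e-9)
    return min_dist <= d <= max_dist and ratio <= max_area_ratio

left = Blob(100, 200, 50, (220, 120, 40))
right = Blob(300, 205, 55, (225, 115, 35))
print(is_signal_colour(left.rgb), plausible_pair(left, right))  # True True
```

In the actual system, a further check against the positions of previously recognized flashes rejects candidates that jump between frames.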

Polygon Calculation
In order to find the car's polygon we need to locate the point at which the car ends. The input to the algorithm is the coordinates of the lights, calculated in the first part. The best way to achieve our goal was to use a simple, gradient-based edge detection algorithm. The concept is to find the gradient by calculating the horizontal and vertical differences between two adjacent pixels. The calculation is done according to:

G_x(i, j) = I(i, j+1) - I(i, j)
G_y(i, j) = I(i+1, j) - I(i, j)
|G(i, j)| = sqrt(G_x(i, j)^2 + G_y(i, j)^2)
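The difference-based gradient can be computed in a few lines, assuming the intensity image is a 2-D array (this is a sketch, not the project's MATLAB code):

```python
# Gradient magnitude from horizontal and vertical differences of adjacent pixels.
import numpy as np

def gradient_magnitude(img):
    img = np.asarray(img, dtype=float)
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    gx[:, :-1] = img[:, 1:] - img[:, :-1]  # horizontal difference I(i, j+1) - I(i, j)
    gy[:-1, :] = img[1:, :] - img[:-1, :]  # vertical difference   I(i+1, j) - I(i, j)
    return np.sqrt(gx**2 + gy**2)

step = np.zeros((4, 4))
step[:, 2:] = 10.0                  # a vertical edge between columns 1 and 2
g = gradient_magnitude(step)
print(g[0, 1], g[0, 0])             # 10.0 0.0 -- the edge stands out
```

The strongest magnitudes mark candidate edges; the search below restricts where we look for them.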

The first stage was to reduce noise. This was done by passing the analyzed image through a 2D Wiener filter (a bit more on this filter at the end of this page), and the effect is easy to see:

[Figure 5: the image before and after Wiener filtering]
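SciPy's 2-D Wiener filter can stand in for this denoising stage in a sketch (the 5x5 neighbourhood size and the synthetic noisy edge are assumptions, not values from the project):

```python
# Wiener-filter denoising of a synthetic noisy edge image.
import numpy as np
from scipy.signal import wiener

rng = np.random.default_rng(0)
clean = np.zeros((64, 64))
clean[:, 32:] = 100.0                        # a sharp vertical edge
noisy = clean + rng.normal(0, 10, clean.shape)

denoised = wiener(noisy, mysize=(5, 5))      # local adaptive Wiener filter

# The filter pulls the image back toward the clean edge on average:
print(np.abs(noisy - clean).mean() > np.abs(denoised - clean).mean())  # True
```

Less noise means fewer spurious maxima in the difference profile, so the edge found along each search line is more likely to be the actual car boundary.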

Now that the image is less noisy, the gradient search is more accurate. The search is performed along N lines whose starting points are located on the line connecting the two lights:

[Figure 6: the search lines, starting from the line connecting the two lights]
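Placing the N starting points on the segment between the two detected lights is simple linear interpolation; a sketch, with illustrative coordinates and N:

```python
# Evenly spaced search starting points on the segment between the two lights.
import numpy as np

def search_start_points(left_light, right_light, n):
    """n evenly spaced (x, y) points on the segment connecting the lights."""
    (x0, y0), (x1, y1) = left_light, right_light
    t = np.linspace(0.0, 1.0, n)
    return np.column_stack((x0 + t * (x1 - x0), y0 + t * (y1 - y0)))

pts = search_start_points((100, 200), (300, 200), 5)
print(pts[:, 0])  # [100. 150. 200. 250. 300.]
```

From each of these points a search line is traced in the direction determined by the scale measurements described next.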

The next stage was to calculate the search direction according to initial scale measurements. These measurements enabled us to evaluate the meter-to-pixel ratio. We took the intensity-filtered image and calculated the differences between two consecutive points along each line, as can be seen in the following image:

[Figure 7: differences between consecutive points along a search line]

We saved the maximum differences (the gradient) and their indexes in the image. We calculated the average length of the different car types filmed from the same location and eliminated results that were unreasonable according to those measurements. The final result can be seen in the next picture:

[Figure 8: the final detected car polygon]
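The final step above can be sketched as follows: along each search line the largest difference between consecutive samples is taken as the car edge, and edges implying an unreasonable car length are discarded. The meter-per-pixel ratio and length bounds here are hypothetical.

```python
# Pick the strongest jump along a search line and sanity-check the implied length.
import numpy as np

def edge_along_line(samples):
    """Index and size of the strongest jump between consecutive samples."""
    diffs = np.abs(np.diff(np.asarray(samples, dtype=float)))
    i = int(np.argmax(diffs))
    return i, diffs[i]

def plausible_length(edge_index, m_per_px=0.05, min_m=3.5, max_m=5.5):
    """Reject edges implying an unreasonable car length (assumed bounds)."""
    return min_m <= edge_index * m_per_px <= max_m

samples = [10, 11, 9, 10, 80, 82, 81]   # intensity jump between indexes 3 and 4
idx, size = edge_along_line(samples)
print(idx, plausible_length(idx, m_per_px=1.0, min_m=2, max_m=5))  # 3 True
```

Connecting the surviving edge points (plus the light coordinates) yields the car's polygon shown in the final picture.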

Tools
1. Video cameras for day and night filming.
2. Adobe Premiere – for sampling the films.
3. Matlab – for implementing our algorithm.

Conclusions
The algorithm worked better during daytime for several reasons:
1. The quality of the sampled images was much better in daylight.
2. It was easier to recognize the edge of the car during daytime because the edges were sharper, which made the gradient search more precise.
We expected that recognizing the lights at night would be easier; however, because of the nature of the camera we used for the night filming, the lights hardly ever appeared to shut down. This made it very difficult to notice the flickers of the parking lights.

Acknowledgments
We are grateful to our project supervisor Sagi Katz for his help, guidance and patience throughout this work, and we would also like to thank Johanan, Ina, Eli and Aharon for all their help. We would also like to thank the Ollendorff Minerva Center for supporting this project.