Abstract
In the present project we designed and implemented a presence tracing system for objects in a room. The system recognizes an object and marks its location on the map of the room. Each camera “sees” only a two-dimensional picture, while the map shown by the application recovers the third dimension: depth.
Implementation
The implementation proceeds in three phases:
A) Sampling pictures from two video cameras, using an application for video signal sampling that enables capturing a picture into the clipboard at the desired moment.
B) Image processing
Recognizing the object and finding the angle from each camera toward it.
This phase contains four main parts:
I. Isolating the object from the background. This is done by comparing the current picture with a static background picture of the room, which reveals the objects that differ from the background (see the first code sketch after this list).

II. Separating the different objects and choosing the one to focus on. This is done by scanning the difference picture obtained in the previous part, finding groups of connected pixels, and classifying them as objects. In this implementation we focus on the biggest object.

III. Finding the distortion of the camera’s lens and correcting the location of the object accordingly. We compared the distorted picture taken by the camera with the original scene and found the camera’s distortion coefficients. With these coefficients we can correct the object’s location.
IV. Calculating the angle toward the object from its location in the picture. We photographed a scene at a known distance and thereby found the number of pixels per degree in the picture. With this ratio we can convert the object’s pixel location into an angle.
C) Geometric calculation using the angles and the locations of the cameras, in order to find the object’s location in the room and mark it on the room’s map (see the second code sketch after this list).
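
The following is a minimal sketch of parts I and II, not the project’s actual code: it assumes grayscale frames held as NumPy arrays, uses SciPy’s connected-component labeling in place of our pixel-scanning routine, and the threshold value is a hypothetical example.

import numpy as np
from scipy import ndimage

def find_largest_object(frame, background, threshold=30):
    # Part I: difference picture between the current frame and the
    # static background (both grayscale arrays of the same shape).
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    mask = diff > threshold

    # Part II: group the foreground pixels into connected components
    # ("objects") and keep the one that covers the most pixels.
    labels, n = ndimage.label(mask)
    if n == 0:
        return None                      # nothing differs from the background
    sizes = ndimage.sum(mask, labels, index=range(1, n + 1))
    biggest = int(np.argmax(sizes)) + 1  # component labels start at 1
    return ndimage.center_of_mass(mask, labels, biggest)  # (row, col) centroid

Any connected-components routine would serve equally well here; the labeling call simply stands in for the pixel-group scan described in part II.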

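The geometric side of parts III and IV and of phase C can be sketched as follows. This is illustrative only: the single-coefficient radial distortion model, the function names, and the description of each camera by a map position and heading angle are our assumptions for the example, not the exact formulation used in the project.

import math
import numpy as np

def undistort(px, py, cx, cy, k1):
    # Part III: correct a point for radial lens distortion, assuming a
    # one-coefficient radial model; (cx, cy) is the image center.
    dx, dy = px - cx, py - cy
    scale = 1.0 + k1 * (dx * dx + dy * dy)
    return cx + dx * scale, cy + dy * scale

def pixel_to_angle(px, cx, pixels_per_degree):
    # Part IV: the horizontal pixel offset from the image center divided
    # by the measured pixels-per-degree ratio gives an angle in degrees.
    return (px - cx) / pixels_per_degree

def locate(cam1, heading1, angle1, cam2, heading2, angle2):
    # Phase C: each camera defines a ray on the room's map, starting at
    # its position and pointing at its heading plus the angle toward the
    # object; the object sits where the two rays intersect.
    t1 = math.radians(heading1 + angle1)
    t2 = math.radians(heading2 + angle2)
    d1 = np.array([math.cos(t1), math.sin(t1)])
    d2 = np.array([math.cos(t2), math.sin(t2)])
    # Solve cam1 + s1*d1 == cam2 + s2*d2 for the ray parameters
    # (np.linalg.solve raises an error if the rays are parallel).
    A = np.column_stack([d1, -d2])
    b = np.asarray(cam2, dtype=float) - np.asarray(cam1, dtype=float)
    s1, _ = np.linalg.solve(A, b)
    return np.asarray(cam1, dtype=float) + s1 * d1

For example, with cameras at two corners of the map, locate((0, 0), 0.0, 12.5, (400, 0), 180.0, -40.0) intersects the two rays at approximately (316, 70) in map units.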
The implementation contains two applications:
1) CamCapture – samples pictures from the video camera. We run one instance for each of the two cameras simultaneously.
2) Locator – processes the images, calculates the object’s location, and marks it on the map. This application manages the two CamCapture instances and the operation of the whole algorithm.

We had to meet requirements of scalability and compatibility in order to cope with changes in the environment’s conditions and in the cameras’ locations. We therefore built the system with a modular approach, using a number of modules and Object Oriented Programming (OOP), so that changes are easy to implement and future requirements remain supported.
In order to accommodate changes in the cameras’ locations and in the camera parameters that influence the image processing, we built a Graphical User Interface (GUI) that allows the parameters to be changed easily during program execution. In this way we can investigate dynamically and find the optimal parameters.
The following picture shows an example of Locator during execution.

Acknowledgments
We would like to thank our supervisor Johanan Erez and the VISL laboratory for their support and guidance, and the Ollendorff Minerva Center, which supported this project.
