The purpose of this project is to enable face recognition from a single 2D image, given a database of canonical images – one per subject. We chose to start from the EigenFaces algorithm. We proposed that for EigenFaces to recognize the test images correctly, they must be canonical, just as the training images are. The goal of the project is therefore to make the test images canonical, even though they may be taken from any 3D angle, with any facial expression, possibly wearing sunglasses, and so on.
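As background, the EigenFaces step itself can be sketched as follows. This is a minimal NumPy sketch with hypothetical function names (`train_eigenfaces`, `recognize`); the project's own Matlab implementation may differ in its details:

```python
import numpy as np

def train_eigenfaces(images, k):
    """Build an EigenFaces basis from flattened training images.

    images: (n_subjects, n_pixels) array, one canonical image per subject.
    Returns the mean face and the top-k principal components ("eigenfaces").
    """
    mean_face = images.mean(axis=0)
    centered = images - mean_face
    # SVD of the centered data gives the principal directions without
    # forming the large n_pixels x n_pixels covariance matrix.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean_face, vt[:k]

def project(image, mean_face, eigenfaces):
    """Coefficient vector describing an image in the eigenface basis."""
    return eigenfaces @ (image - mean_face)

def recognize(test_image, train_images, mean_face, eigenfaces):
    """Index of the training subject with the nearest coefficient vector."""
    test_coeffs = project(test_image, mean_face, eigenfaces)
    train_coeffs = np.array([project(t, mean_face, eigenfaces)
                             for t in train_images])
    return int(np.argmin(np.linalg.norm(train_coeffs - test_coeffs, axis=1)))
```

Identity is decided by nearest-neighbour distance in coefficient space, which is exactly why a non-canonical test image – whose coefficients no longer match its subject's canonical image – breaks recognition.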
Without any 3D input or models, we must transform a 2D image of a person, taken under arbitrary conditions, into his basic frontal image – while his identity is, of course, unknown.
The problem is hard enough on its own, but there are further complications:
– If the azimuth angle is large enough, an entire side of the face does not appear in the test image and cannot be reconstructed.
– Some parts of the face may be hidden by the nose or sunglasses and cannot be reconstructed.
– With traditional methods (EigenFaces, correlation), a face photographed from a large angle often looks more like the canonical image of another person than like the subject's own.
We assume Brightness Constancy between the training and test images of a person: every pixel keeps its brightness level and only its location may change, which allows us to model the training image as a coordinate transformation of the test image:

I_train(x, y) = I_test(u(x, y), v(x, y))    (2)

To make the problem computable, we further assume that this transformation is polynomial, with degree up to 3.
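A degree-3 polynomial coordinate transformation can be applied to an image as in the sketch below. The monomial ordering and the nearest-neighbour sampling are choices made for this illustration, not necessarily those used in the project:

```python
import numpy as np

def polynomial_warp(test_image, coeffs_x, coeffs_y):
    """Warp an image with a 2D polynomial coordinate transformation.

    coeffs_x, coeffs_y each hold one coefficient per monomial x^i * y^j
    with i + j <= 3 (10 terms), mapping output coordinates to sampling
    coordinates in the test image. Nearest-neighbour sampling keeps the
    sketch short.
    """
    h, w = test_image.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    # All monomials x^i * y^j with total degree up to 3.
    monomials = np.stack([xs**i * ys**j
                          for i in range(4) for j in range(4 - i)])
    u = np.tensordot(coeffs_x, monomials, axes=1)  # sampling x-coordinates
    v = np.tensordot(coeffs_y, monomials, axes=1)  # sampling y-coordinates
    u = np.clip(np.rint(u), 0, w - 1).astype(int)
    v = np.clip(np.rint(v), 0, h - 1).astype(int)
    return test_image[v, u]
```

Setting the coefficient of the x monomial in `coeffs_x` and of the y monomial in `coeffs_y` to 1 (and all others to 0) yields the identity transformation; the 20 coefficients are the unknowns the optimization must recover.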
We then use optimization methods that simultaneously estimate the optimal parameters of the transformation between the test and training images and the EigenFaces coefficient vector that defines the identity of the test image.
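The report does not spell out the exact optimization scheme, so the sketch below shows one plausible form of such a joint fit: alternating between a closed-form projection for the eigenface coefficients and a finite-difference gradient step on the warp parameters. The `warp` callback and all names here are hypothetical:

```python
import numpy as np

def joint_fit(test_image, mean_face, eigenfaces, warp, init_params,
              n_iters=20, step=1e-6):
    """Jointly estimate warp parameters and eigenface coefficients.

    warp(test_image, params) -> flattened warped image (caller-supplied).
    With the warp fixed, the optimal coefficients are a closed-form
    projection onto the eigenface basis; the warp parameters are then
    nudged by finite-difference gradient descent on the reconstruction
    error of the warped image.
    """
    params = init_params.astype(float).copy()
    for _ in range(n_iters):
        warped = warp(test_image, params)
        # Closed-form step: project the warped image onto the basis.
        coeffs = eigenfaces @ (warped - mean_face)

        def error(p):
            w = warp(test_image, p)
            c = eigenfaces @ (w - mean_face)
            r = mean_face + eigenfaces.T @ c
            return np.sum((w - r) ** 2)

        # Finite-difference gradient step on the warp parameters.
        eps = 1e-4
        base = error(params)
        grad = np.zeros_like(params)
        for k in range(params.size):
            p = params.copy(); p[k] += eps
            grad[k] = (error(p) - base) / eps
        params -= step * grad
    return params, coeffs
```

Minimizing the reconstruction error over both unknowns at once is what lets the method pull a non-canonical test image toward the face subspace while reading off its identity from the recovered coefficients.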
– Before any changes to the test images, the recognition rate was 19%.
– After making the images closer to canonical by locating the eyes, the rate was 45%.
– After applying the first automatic algorithm described, the rate was 54%.
– After using the robust optimization method of the second part, the rate was 75%.
Another qualitative comparison was made with the FACEVACS 4.0 face recognition module by Cognitec, the top performer in FRVT 2002 in the scenario relevant to this project.
The pictures show precisely what the optimization process adds, and why the recognition rates are so much higher after using it.
The outlier images show that the algorithm automatically found which parts of the face differ greatly from the training image.
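One common way to flag such outlier pixels is to threshold the residual between the warped test image and its eigenface reconstruction at a few robust standard deviations. The MAD-based threshold below is a standard choice for this illustration, not necessarily the rule used in the project:

```python
import numpy as np

def outlier_mask(warped_test, reconstruction, k=2.5):
    """Flag pixels whose residual is far from the bulk of the residuals.

    Pixels hidden by sunglasses, the nose, or out-of-view face regions
    produce large residuals; masking them out keeps the fit robust.
    """
    residual = warped_test - reconstruction
    mad = np.median(np.abs(residual - np.median(residual)))
    scale = 1.4826 * mad + 1e-12  # MAD -> std-dev scale for Gaussian noise
    return np.abs(residual) > k * scale
```

Because the MAD ignores extreme values, the threshold is not inflated by the occluded pixels themselves, so sunglasses or a hidden cheek are detected rather than dragging the whole fit off.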
We implemented our algorithmic solution in Matlab on a standard PC. Even our most complicated solutions took only a few seconds per image, so no other tools were needed. We also used an OpenCV module, run from Visual C, for the initial face detection.
In this project we empirically demonstrated our argument that EigenFaces fails on images with severe pose variation and facial expression changes.
We also demonstrated empirically our proposition that some form of canonization is needed and can improve recognition rates.
We developed a novel algorithm for automatic face canonization, implemented it in Matlab and tested it on a database of face images.
We showed that our algorithm increases the recognition rate considerably, and that it automatically produces a transformation that makes the image look closer to canonical.
I would like to thank my supervisor, Tomer Michaeli, for his devoted assistance throughout the project, from the idea stage through the algorithmic and implementation stages. We are also grateful to the Ollendorf Minerva Center Fund for supporting this project.