Short-range images acquired with a regular camera usually contain objects that are in focus alongside objects that are blurred, due to the physical limitations of the lens and imaging system. In this project, we investigate methods and techniques for restoring a defocused image, given additional information on the depth of every object in it. The problem was approached by sectioning the image into approximately evenly defocused areas, restoring each one using various techniques, and stitching them together with minimal boundary effects. Experimental results show success in restoring images degraded up to a certain level, comparable to results reported in the literature. The advantages and disadvantages of each method are discussed.
A photographer usually focuses the camera on one object placed at a certain distance from the camera’s plane. Geometrical optics shows that other objects, placed at different distances from the camera, will be blurred, where the blur radius depends on the object’s distance from the camera. Thus, restoring the fully focused image is a space-variant problem. It should be noted that full restoration is not possible, since the blur operator acts as a low-pass filter; its action cannot be exactly reversed, but some aspects of the image can be enhanced.
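The dependence of blur radius on object distance follows from thin-lens geometry. The sketch below (not taken from the report; the function name and the thin-lens model are assumptions) computes the diameter of the blur circle on the sensor for an object at distance `d` when the lens is focused at distance `s`:

```python
def blur_circle_diameter(f, N, s, d):
    """Diameter of the circle of confusion on the sensor (thin-lens model).

    f : focal length (m)
    N : f-number (aperture = f / N)
    s : distance the lens is focused at (m)
    d : actual object distance (m)

    Derived from the thin-lens equation: c = A * f * |d - s| / (d * (s - f)),
    where A is the aperture diameter.  An object exactly at the focused
    distance (d == s) gives zero blur.
    """
    A = f / N  # aperture diameter
    return A * f * abs(d - s) / (d * (s - f))
```

For a 35 mm f/2 lens (matching the report's hardware) focused at 1 m, the blur circle grows monotonically as the object moves away from the focused plane, which is why each depth layer needs its own kernel.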
The space-variance problem was approached in the following manner. According to the depth map of the objects in the picture, the image was sectioned into large areas of similar blur, using the depth of focus resulting from the lens geometry. Each section was then restored in a space-invariant manner, and the restored sections were stitched together to compose the fully restored image. The restoration itself was carried out with several image-restoration algorithms, which differ both in the model for the blurring kernel and in the way it is inverted. Two models were used for the blurring kernel: a Gaussian model and a calibrated kernel, which was extracted experimentally. Three methods were used to invert the kernel: Wiener filtering, the S-transform, and the Beltrami algorithm. The stitching was done by dilating the closer depth regions in the depth map at the expense of the farther regions, in order to also incorporate the blur radius into the calculations. Before each depth is restored, the closer depths in the image are zeroed, and the image is dilated into those zeroed areas to minimize boundary effects. After restoration, only the exact depth region is cut out and added to the final result.
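The layer-by-layer scheme above can be sketched as follows. This is a minimal illustration, not the report's MATLAB implementation: the function names are hypothetical, the inversion step uses plain Wiener deconvolution (one of the three methods named above), the fill-by-dilation step is a crude grey-dilation approximation of "dilating the image into the zeroed areas", and the PSF is assumed to be origin-centered:

```python
import numpy as np
from numpy.fft import fft2, ifft2
from scipy.ndimage import grey_dilation

def wiener_deconv(img, psf, nsr=0.01):
    """Space-invariant Wiener deconvolution in the frequency domain.

    nsr is the assumed noise-to-signal power ratio; the psf is assumed
    to be anchored at the array origin (no centering shift applied).
    """
    H = fft2(psf, s=img.shape)
    G = np.conj(H) / (np.abs(H) ** 2 + nsr)  # Wiener inverse filter
    return np.real(ifft2(fft2(img) * G))

def fill_zeros(section, zero_mask, iters=5):
    """Grow the valid region into zeroed pixels by grey dilation (crude fill)."""
    filled = section.copy()
    for _ in range(iters):
        grown = grey_dilation(filled, size=(3, 3))
        filled = np.where(zero_mask & (filled == 0), grown, filled)
    return filled

def restore_by_depth(img, depth_map, psfs):
    """Restore each depth layer separately and stitch the results (sketch).

    depth_map labels every pixel with a depth index; psfs maps each
    index to its blur kernel (ascending index = closer to farther).
    """
    depths = sorted(psfs)
    out = np.zeros_like(img, dtype=float)
    for i, dep in enumerate(depths):
        closer = np.isin(depth_map, depths[:i])
        section = np.where(closer, 0.0, img.astype(float))  # zero closer layers
        section = fill_zeros(section, closer)  # dilate into the zeroed areas
        restored = wiener_deconv(section, psfs[dep])
        keep = depth_map == dep                # cut only the exact depth region
        out[keep] = restored[keep]
    return out
```

The point of zeroing and then dilating before deconvolution is that the sharp edge of a zeroed region would otherwise ring through the frequency-domain inversion and contaminate the layer being restored.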
MATLAB 7.1 was used to implement and run the different algorithms. The images used for testing the algorithms were taken with a Nikon D100 digital camera, with an AF Nikkor 35mm 1:2D lens.
The experimental results were successful. A few adjustments were made to the algorithms in order to improve the results: the S-transform algorithm was followed by noise filtering, and the kernel-calibration process was modified to make it more accurate and more robust to noise. The figures below demonstrate the results obtained using the different successful techniques.
Figure 2 - Results of sharpening with the S-transform. Raw result on the left and filtered result on the right.
Figure 3 - Results of using the calibrated blurring kernel. Wiener filtering on the left, and the Beltrami algorithm on the right.
As demonstrated in the results, the most successful technique was calibrating the blurring kernel using the inverse Abel transform and then restoring the image with the Beltrami algorithm. The major drawback of this method is its high computational cost. A second drawback is the large amount of information that must be stored in order to calibrate the camera for every configuration. At the price of a less effective restoration, both problems can be avoided by using the S-transform algorithm. The Gaussian estimate of the blurring kernel yields the worst results and is therefore considered ineffective.
It is suggested to approach this problem by interpolating blurring kernels and then restoring the image in a single iteration. It is also suggested to perform a temporary calibration of the camera, in order to avoid storing information about its blurring kernel. Finally, it is suggested to compute the depth map before restoration, using depth-from-focus/defocus techniques.
I wish to thank Hilit Unger for supervising this project, Ran Kaftory for letting me use his implementation of the Beltrami algorithm, and Johanan Erez, Ina Krinski, and Aharon Ya’acobi for their technical support.