Shadow Detection and Removal Research


This project won the Distinguished Project Award of VISL 2005

Abstract
The area of shadow detection and removal has made great progress in recent years, and many works attempt to deal with this difficult yet intriguing problem. In the following project we present three new approaches to shadow removal, based on the works of Yair Weiss (2001) and of Graham D. Finlayson, Steven D. Hordley and Mark S. Drew (2002) that were described in our previous project; it is therefore recommended to study the ideas, algorithms and problems presented there in order to fully understand the motivation for our current work.
Not all of these approaches are successful, and none of them has been fully developed and explored, but we feel that some of the ideas described in this work have good potential for further study and can lead to new and improved techniques in this field.

 

Background:
The previously examined algorithms (Weiss, F.H.D.) have several disadvantages and limitations that call for further development:

  • Shadow Detection – a better definition of the shadow regions in the image can lead to better removal.
  • Incomplete Removal – neither algorithm removes all shadows completely, and artifacts usually remain at the shadow edges.
  • Vague Shadows – due to the reconstruction method, soft and vague shadows are handled poorly.
  • Imperfect Reconstruction – the reconstruction process is imperfect, especially at the image boundaries, and fails to retrieve the DC component of the reflectance image.

 

The following approaches attempt to resolve these limitations.

 

The solution:

Three new approaches and one expansion were explored in this project:

Mutual Information (MI):

This approach was designed to detect the shadowed areas in the image. In the F.H.D. algorithm, a simple edge detection was applied to the original and the invariant images and the resulting outputs were compared; this method produced incomplete shadow borders. The new approach uses MI as a better measure of the relation between the two images in order to detect the shadow.

Mutual information measures the amount of information shared by two images:

MI(X,Y)=H(X)+H(Y)-H(X,Y)
where H is the entropy (marginal or joint).

MI uses the probability distribution functions (PDFs) of the two images to establish the correlation between them. In theory, a higher MI value indicates a stronger correlation between the two images, so the MI is expected to drop where they disagree, for example across a shadow edge that appears only in the original image. The input for each pixel is a 30×30 window around that pixel, taken from both the original and the invariant images.
Because the PDF is unknown, we used the Parzen window method to estimate a continuous PDF from the images’ histograms.
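As an illustration of this step (our implementation was in MATLAB; the Python sketch below is only an approximation, and the bin count and Gaussian kernel width are assumed values), the MI of two corresponding 30×30 windows can be estimated from a Parzen-smoothed joint histogram:

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def mutual_information(win_a, win_b, bins=32, sigma=1.0):
        # Joint histogram of the two windows (grey levels assumed scaled to [0, 1])
        joint, _, _ = np.histogram2d(win_a.ravel(), win_b.ravel(),
                                     bins=bins, range=[[0, 1], [0, 1]])
        # Parzen-window estimate: smooth the histogram with a Gaussian kernel
        joint = gaussian_filter(joint, sigma)
        joint /= joint.sum()
        px = joint.sum(axis=1)   # marginal PDF of the original-image window
        py = joint.sum(axis=0)   # marginal PDF of the invariant-image window

        def entropy(p):
            p = p[p > 0]
            return -np.sum(p * np.log2(p))

        # MI(X, Y) = H(X) + H(Y) - H(X, Y)
        return entropy(px) + entropy(py) - entropy(joint)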

 


Figure 1 – Example #1 of MI effectiveness

 


Figure 2 – Example #2 of MI effectiveness

At this stage the results are not encouraging. It seems that the MI produced results that are no better than those of a simple edge detection method. The results are not consistent, as can be seen in the examples: the shadow edge appears in example #1 but does not appear in example #2.
Nevertheless, we believe that this approach deserves further attention, because it rests on a solid theory that may yet lead to better shadow detection.

 

Image Repainting:
This approach uses the invariant image (from the F.H.D. algorithm) as a template in order to eliminate the need for shadow detection.
The theory is that a given surface color should receive a fixed value in the invariant image, regardless of the lighting conditions (as long as the light source is Planckian). This value is unique for each surface color. By clustering the pixels that share the same value in the invariant image, one can deduce the original color that matches this value from the corresponding pixels in the original image, and thus repaint the invariant image.
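A minimal sketch of this repainting idea (in Python, for illustration only; the number of bins and the use of a per-cluster median color are our assumptions, not part of the original MATLAB implementation):

    import numpy as np

    def repaint(original, invariant, bins=64):
        # original: H x W x 3 image; invariant: H x W illumination-invariant image
        edges = np.linspace(invariant.min(), invariant.max(), bins)
        labels = np.digitize(invariant, edges)
        repainted = np.zeros_like(original)
        for k in np.unique(labels):
            mask = labels == k
            # Representative color of this cluster: the median RGB of its
            # pixels in the original image (other statistics could be used)
            repainted[mask] = np.median(original[mask], axis=0)
        return repainted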

 


Figure 3 – Example of Image Repainting

In this case the theory is quite far from reality. The same surface color does not map to the same value in the invariant image for every instance of it in the original image. Therefore the clustering was not performed properly, and the colors were not assigned correctly.
This method is very sensitive to the input image and suffers from the same disadvantages as the invariant image itself, such as incomplete shadow elimination.
We think that this approach has exhausted itself and that further research will not improve the results.

 

Shadow Surfaces:
This algorithm approaches the problem of shadow removal from a different direction. Assuming the shadow has already been detected, the main problem is to separate the luminance image from the reflectance image while estimating the luminance image directly.
From computer graphics we know that the color of an object is the total illumination multiplied by the surface color. The illumination is composed of ambient, diffuse and specular components. We are interested in the diffuse component, which is determined by the light source and the angle at which the light hits the surface.
The basic idea is that, using two (or more) images taken under similar lighting conditions, one can fit a polynomial surface that describes the illumination in the area of the shadow. Diffuse lighting on a plane requires only a first-degree polynomial approximation, but we used higher-order surfaces to compensate for changes in the scene.
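A minimal sketch of the surface-fitting step (Python, for illustration only; it assumes a grayscale shadowed image, a second image of the scene in which the masked area is not shadowed, and a known shadow mask, and all variable names are ours):

    import numpy as np

    def remove_shadow(shadowed, shadow_free, mask, degree=2):
        # shadowed, shadow_free: grayscale images of the scene under similar
        # lighting; mask: boolean image marking the detected shadow region
        ys, xs = np.nonzero(mask)
        # Illumination attenuation observed inside the shadow, pixel by pixel
        ratio = shadow_free[ys, xs] / np.maximum(shadowed[ys, xs], 1e-6)
        # Design matrix of 2-D polynomial terms: 1, x, y, x^2, x*y, y^2, ...
        terms = [(xs.astype(float) ** i) * (ys.astype(float) ** j)
                 for i in range(degree + 1) for j in range(degree + 1 - i)]
        A = np.stack(terms, axis=1)
        coeffs, *_ = np.linalg.lstsq(A, ratio, rcond=None)
        # Evaluate the fitted illumination surface and relight the shadow pixels
        corrected = shadowed.astype(float)
        corrected[ys, xs] *= A @ coeffs
        return corrected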

 


Figure 4 – Example of Shadow Surfaces

 

The reconstruction within the shadow boundaries is quite good. There is a problem at the edges, which results from two main factors: the shadow detection was not accurate enough, and the lighting at the edges cannot be treated as part of the approximated surface.


Figure 5 – Local Shadow Surface

 

When the reconstruction is done locally, the method produces excellent results (again, except at the shadow edges).

The main advantages of this method are that there is no need to calculate derivatives or the DC component, and that it avoids smears in the reflectance image.
We think that this approach definitely deserves further study, especially in automating the whole process and in refining the reconstruction at the shadow edges.

Image Mirroring:

This section deals with an expansion of the reconstruction-by-derivatives process, which both the Weiss and the F.H.D. algorithms use. This reconstruction gives poor results at the boundaries of the image. The phenomenon can be explained by noting that both algorithms remove edges that belong to a shadow; however, a shadow edge that coincides with the image boundary cannot be removed, since the image boundaries themselves cannot be removed.
To overcome this problem we mirrored the original image set, so that no original shadow edge coincides with a boundary of the image.
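A minimal sketch of the mirroring step (Python, for illustration only; the original implementation was in MATLAB, and padding by a full image size on each side is an assumption):

    import numpy as np

    def mirror_pad(image):
        # Reflect the image about each of its boundaries, so that every original
        # edge, including a shadow edge touching the boundary, becomes interior
        h, w = image.shape[:2]
        pad = ((h, h), (w, w)) + ((0, 0),) * (image.ndim - 2)
        return np.pad(image, pad, mode='symmetric')

    def crop_center(padded, h, w):
        # Recover the original-size result after the derivative-based reconstruction
        return padded[h:2 * h, w:2 * w]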


Figure 6 – The mirroring method

 


Figure 7 – Example of the mirroring method

Using this method removed the artifacts at the edges of the images, although the improvement is not always evident.
It is a necessary step towards a unified removal of the shadows.

 

Tools
All of our methods were implemented in MATLAB running on a personal computer.
The images for the Weiss algorithm were taken with a Nikon Coolpix 995 digital camera, and the images for the F.H.D. algorithm with a Nikon D-100 digital camera.

 


 

Conclusions
Unlike many other projects, a significant part of this project was devoted to analyzing the disadvantages and limitations of the previous project's algorithms, and to devising methods to overcome them. As can be seen, none of our suggestions has been fully explored, but the initial results of some of them are encouraging.

The MI approach may produce better results with parametric models instead of the Parzen window method, and the shadow surface results are certainly worth further development. The mirroring expansion is a useful addition, although it is memory-consuming; a little tweaking may ease that limitation as well. The image repainting algorithm's theory is not strong enough, and its results are not satisfying either.

In conclusion, we believe that future algorithms for shadow detection and removal should be based on the image contents rather than on statistical or physical properties of shadows alone, and that more sophisticated pattern recognition techniques should be used to achieve the accurate shadow detection that is vital for the removal process.

 

Acknowledgments
We would like to thank our supervisor, Dr. Yoav Y. Schechner, for allowing us to express our creativity in this project, and the VISL staff for their support and guidance during the project. We would also like to thank Mrs. Sarit Schwartz for her contribution to the project, and the Ollendorff Minerva Center Fund for supporting this project.

 
