HVS (Human Visual System) Oriented Pixel Classifier


Abstract
In this project we implemented a Human Visual System oriented mechanism that classifies pixels in images. The input images to the machine are black-and-white (grayscale) images. We implemented the system to be generic, meaning it should work successfully for all sorts of images, with minimal assumptions about the properties of the input image.
 

The classification is performed by characterizing each pixel as a constant (background) pixel, as belonging to an edge, or as part of a texture. The system takes an intensity image (containing 256 gray levels) of arbitrary size as its input and returns a trinary image, i.e. an image containing three gray levels representing the different types of pixels: 0 representing a smooth-area pixel (black), 1 representing an edge pixel (white), and 0.5 representing a textured pixel (gray). Ideal output images of our machine will contain one-dimensional, continuous edges (lines, one pixel thick) and, as far as possible, continuous two-dimensional surfaces of pixels identified as textures or background.
 
The General Procedure
The machine is supposed to be equally effective for any BW image, and since the same edge- and texture-detection algorithms are used for all images, specific input parameters must be derived for the processing of each specific image. Those parameters are determined in the first step. We assumed that the input image might be corrupted by additive noise:
I(x,y) = f(x,y) + n(x,y)

The input image to our machine is I(x,y), the original, ideal image is f(x,y), and the noise is n(x,y). The added noise is estimated first. The statistical parameters of the noise are estimated by convolving the image with the mask (the standard 3×3 zero-mean mask whose squared coefficients sum to 36, consistent with the normalization below):

N = [  1  -2   1
      -2   4  -2
       1  -2   1 ]

This mask's average is 0 and, for noise of variance sigma^2, the variance of its response is 36·sigma^2. Therefore A = I(x,y) * N is the result of applying the noise-estimation operator to the image. To obtain an estimate of the noise variance sigma^2, we divide the sum of the squared pixels of the image A by 36 times the number of pixels:

sigma^2 = (1 / (36·P)) · Σ_{x,y} A(x,y)^2,   P = number of pixels in A
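The noise-estimation step can be sketched in Python (illustrative only; the project itself was implemented in Matlab). The 3×3 mask below is assumed to be the standard zero-mean high-pass mask whose squared coefficients sum to 36, matching the division by 36 described above.

```python
import numpy as np

# Assumed zero-mean noise-estimation mask: its coefficients sum to 0
# and their squares sum to 36, so for pure noise of variance sigma^2
# the response A = I * N has variance 36 * sigma^2.
MASK = np.array([[ 1, -2,  1],
                 [-2,  4, -2],
                 [ 1, -2,  1]], dtype=float)

def estimate_noise_variance(image):
    """Estimate sigma^2 of additive noise: mean of (I * N)^2 over 36."""
    img = np.asarray(image, dtype=float)
    h, w = img.shape
    # 'valid' 2D convolution of img with MASK, done via explicit shifts
    # (the mask is symmetric, so convolution equals correlation here)
    acc = np.zeros((h - 2, w - 2))
    for dy in range(3):
        for dx in range(3):
            acc += MASK[dy, dx] * img[dy:dy + h - 2, dx:dx + w - 2]
    return (acc ** 2).mean() / 36.0
```

On a constant image the mask responds with zero, so the estimate is 0; on a noisy image it approximates the noise variance.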
 
Edge Detection Methods
An edge is usually described as the border line between two areas in an image with two different intensity levels. There is no mathematical formula for an edge, and no unique "correct" solution to the edge-detection problem for an arbitrary picture. In many images there will be pixels that are classified as an edge by one person but perhaps not by another.

Most of the important information hidden in a pictured scene lies in the structure of its edges, and one can get a general idea of the subject of an image just by looking at the composition of its edges. Therefore, despite the difficulty of the process, edge detection is the first, basic step in many image-processing machines.
 
Ideally, the difference between two gray levels around an edge is large and appears in a narrow area. In many cases, however, the change of intensity in a picture is slight and spread over a large space. In such cases, a gradient threshold is determined, based on the desired edge profile. In the Fourier domain, an edge is characterized by high frequencies in the direction orthogonal to the edge and low frequencies in the direction parallel to it.
 
The method used in the project is the one proposed by Canny in 1986. The Canny method is very common and is used in various applications, including modern ones, and as a baseline when examining new edge-detection algorithms.
 
The three most important criteria addressed by the Canny method are:

  1. Low error percentage – avoiding missed detection of clear, obvious edges that appear in the original image, and avoiding detection of edges that would not usually be recognized in the original image (false edges)
  2. Good localization – the detected edge should be located as close as possible to the center of the true edge, which is approximately the local maximum point of the local gradient
  3. One response per edge – this criterion is theoretically included in the first one, since two responses for one edge necessarily imply a false edge; however, the mathematical solution of the first criterion does not cover the third

 
A full description of the Canny method is available in the Matlab documentation.
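As an illustration of the first steps of the Canny pipeline (gradient computation and magnitude thresholding), here is a minimal NumPy sketch; the project itself used Matlab's built-in implementation, and the threshold below is an arbitrary illustrative value, not a parameter of the actual system.

```python
import numpy as np

def sobel_gradients(img):
    """Return gradient magnitude and direction using 3x3 Sobel masks."""
    img = np.asarray(img, dtype=float)
    kx = np.array([[-1, 0, 1],
                   [-2, 0, 2],
                   [-1, 0, 1]], dtype=float)  # horizontal derivative
    ky = kx.T                                 # vertical derivative
    h, w = img.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for dy in range(3):
        for dx in range(3):
            patch = img[dy:dy + h - 2, dx:dx + w - 2]
            gx += kx[dy, dx] * patch
            gy += ky[dy, dx] * patch
    return np.hypot(gx, gy), np.arctan2(gy, gx)

def simple_edges(img, threshold=50.0):
    """Binary edge map: gradient magnitude above an illustrative threshold.

    Full Canny adds Gaussian smoothing, non-maximum suppression, and
    hysteresis thresholding on top of this basic step.
    """
    mag, _ = sobel_gradients(img)
    return mag > threshold
```

On a sharp vertical step the detector responds only in the columns adjacent to the intensity jump.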
 
Edge improvements
The desired edge image contains not "spots" of edges but long, thin, continuous lines. Seldom does the edge-detection algorithm deliver such results directly. The solution was to use morphological operators, which work well on different problematic parts of an image and were easily applied in Matlab. The main morphological operators used to approach an ideal edge image were thinning, erosion, bridging, and so on.
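As an illustration of this kind of morphological cleanup, the sketch below implements binary dilation and erosion with a 3×3 square structuring element and combines them into a closing, which bridges one-pixel gaps in edge lines (a rough stand-in for the "bridging" operator; the project used Matlab's built-in morphology functions, not this code).

```python
import numpy as np

def binary_dilate(mask):
    """3x3 dilation: a pixel is set if any neighbor (or itself) is set."""
    m = np.asarray(mask, dtype=bool)
    p = np.pad(m, 1, constant_values=False)
    out = np.zeros_like(m)
    for dy in range(3):
        for dx in range(3):
            out |= p[dy:dy + m.shape[0], dx:dx + m.shape[1]]
    return out

def binary_erode(mask):
    """3x3 erosion: a pixel survives only if its whole 3x3 block is set."""
    m = np.asarray(mask, dtype=bool)
    p = np.pad(m, 1, constant_values=False)
    out = np.ones_like(m)
    for dy in range(3):
        for dx in range(3):
            out &= p[dy:dy + m.shape[0], dx:dx + m.shape[1]]
    return out

def bridge_gaps(mask):
    """Morphological closing (dilate, then erode): fills one-pixel gaps
    in lines, at the cost of slightly shortening open line ends."""
    return binary_erode(binary_dilate(mask))
```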
 
Textures in general
There is no exact or even well-defined description of texture. When one observes sand on the beach, or the bark of a tree, the homogeneous visual pattern one observes is texture. It not only corresponds to variations in intensity, but also comes from the variations of the surface characteristics of the item, which can be felt. Texture analysis has been used extensively, from practical uses in robot and computer vision to the classification of terrain from aerial images.

Texture can be generally divided into two types, namely statistical and structural. In statistical terms, texture can be defined as the arrangement or spatial distribution of intensity variations in an image. Structural texture is the placement or spatial distribution of a set of primitives in an image based on some predefined placement rules:
Examples of statistical and structural textures
 
Texture detection in the project
The texture-detection method used in this project is based on gradient analysis of the image. Taking into consideration not only the magnitude of a pixel's gradient but also its direction enabled classification of textures (statistical, structural, oriented, mixed, etc.). A pixel with a high gradient level is recognized as texture only if the pixels around it have a similar orientation. For every internal pixel (not on the frame of the image), the 3×3 neighborhood block of pixels around it is examined. The first step is classifying every pixel into one of the groups: constant, directional, textured, or mixed. Pixels classified as constant or textured will surely not be part of an edge. Constant pixels are, of course, classified as part of the background. Those classified as directional have their gradient examined, checking how far they are from a local gradient maximum. Textured and mixed pixels go through the "JND Test".
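A classification of this kind can be sketched as follows. The thresholds and the circular-statistics measure of orientation spread are illustrative choices of ours, not the project's actual parameters; the idea is only that a block is "directional" when its strong gradients share an orientation and "textured" when they do not.

```python
import numpy as np

def classify_block(gmag, gdir, mag_thresh=10.0, dir_spread=np.pi / 8):
    """Classify the center pixel of a 3x3 block from the gradient
    magnitudes (gmag) and directions (gdir) of the block.

    Returns 'constant', 'directional', 'textured', or 'mixed'.
    mag_thresh and dir_spread are hypothetical illustrative values.
    """
    gmag = np.asarray(gmag, dtype=float)
    gdir = np.asarray(gdir, dtype=float)
    strong = gmag > mag_thresh
    if not strong.any():
        return "constant"            # all gradients weak: background
    dirs = gdir[strong]
    # Mean resultant length of the doubled angles: near 1 when the
    # strong gradients share one orientation (mod pi), near 0 otherwise.
    coherence = np.hypot(np.sin(2 * dirs).mean(), np.cos(2 * dirs).mean())
    aligned = coherence > np.cos(2 * dir_spread)
    if strong.all():
        return "directional" if aligned else "textured"
    return "directional" if aligned else "mixed"
```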
 
The pixel classification process through an image
 
The JND Test
It is known that the human visual system's ability to notice a difference of intensity in a small part of an image depends on the background intensity. The difference needed between the intensity of an object and its background's average intensity – the "Just Noticeable Difference" (JND) – varies with the background intensity. Knowing also that the intensity values stored in a digital image file are not linearly related to perceived brightness, we decided to use a varying threshold. This nonlinear, changing threshold is based on Weber's law, which states that the ratio of the just-noticeable intensity difference to the background intensity is approximately constant:

delta_I / I = k (constant)
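A Weber-law JND test can be sketched as follows. The Weber fraction k and the dark-background floor are hypothetical illustrative values, not the ones used in the project.

```python
def jnd_threshold(background, k=0.02, floor=1.0):
    """Just Noticeable Difference under Weber's law: delta_I = k * I.

    k (the Weber fraction) and floor (a minimum threshold for very dark
    backgrounds, where Weber's law breaks down) are illustrative values.
    """
    return max(k * background, floor)

def is_noticeable(pixel, background, k=0.02):
    """True if the pixel differs from its background average by more
    than the JND for that background intensity."""
    return abs(pixel - background) > jnd_threshold(background, k)
```

Because the threshold scales with the background, the same absolute intensity difference can be noticeable against a dark background yet invisible against a bright one.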
 
Texture improvements
Morphological operators were found to be easy and efficient here, as in the edge-detection step. Unlike edges, texture is a two-dimensional surface; therefore other morphological operators were used, such as filling, clearing, and dilation, in order to achieve a smooth-edged surface with few holes in it.
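One simple way to smooth a two-dimensional texture mask in this spirit is a 3×3 majority filter, which fills small holes and removes isolated pixels in one pass. This is an illustrative stand-in for the filling/clearing operators mentioned above, not the project's Matlab code.

```python
import numpy as np

def majority_smooth(mask):
    """Smooth a binary texture mask: a pixel is set in the output if at
    least 5 of the 9 pixels in its 3x3 neighborhood are set. This fills
    single-pixel holes and clears isolated pixels, yielding smoother
    two-dimensional texture regions."""
    m = np.asarray(mask, dtype=int)
    p = np.pad(m, 1)                     # zero padding at the frame
    count = np.zeros_like(m)
    for dy in range(3):
        for dx in range(3):
            count += p[dy:dy + m.shape[0], dx:dx + m.shape[1]]
    return count >= 5
```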
 
The final image
Finally, given an edge image and a texture image, an integrated image is needed. Some pixels might be recognized both as edge and as texture pixels. In such cases, top priority is given to the edge detector, based on the idea that most of the information in an image is hidden in its edges.
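The merging rule above amounts to a few lines of code: write the texture level first and the edge level last, so that edges win wherever the two maps overlap (an illustrative sketch, using the trinary levels 0, 0.5, 1 defined in the abstract).

```python
import numpy as np

def combine(edge_mask, texture_mask):
    """Merge binary edge and texture maps into the trinary output image:
    0 = background (black), 0.5 = texture (gray), 1 = edge (white).
    Edges take priority where a pixel is flagged as both."""
    edge_mask = np.asarray(edge_mask, dtype=bool)
    texture_mask = np.asarray(texture_mask, dtype=bool)
    out = np.zeros(edge_mask.shape, dtype=float)
    out[texture_mask] = 0.5
    out[edge_mask] = 1.0    # written last: edge overrides texture
    return out
```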
An input image and the machine’s result
 
Acknowledgments
We would like to thank our supervisor Guy Gilboa and Johanan Erez for their support and guidance throughout this project.
Also we would like to thank the Ollendorff Minerva Center Fund which supported this project.