The area of Digital Terrain Model (DTM) matching has been neglected for the last decade; few have tried to deal with this interesting and practical problem. The project is based on the work of J.J. Rodriguez and J.K. Aggarwal (1990), and we strongly recommend reading that work for further background on this project. Moreover, this project improves on the said work and deals with more realistic problems and questions raised by it. The project contains solutions found empirically, and thus opens the way for further study into improving the model and automating the methods used.
The previous work lacks several more realistic features:
- Rotation and Translation – a full 3-dimensional rotation and translation should be considered.
- Sources of DTM – DTMs from different sources should be compared.
- Resolution – DTMs of different resolutions should be matched.
The following algorithm will deal with these issues.
In order to achieve rotation- and translation-invariant matching, we require invariant features that are present in both images. The invariant feature we chose is the curvature vs. arc length of the images' zero sets.
We retrieve the zero-set curvature as follows.
First we pass the image through a Laplacian of Gaussian (LoG) filter. As the name implies, this applies the Laplacian operator to a Gaussian-smoothed image (equivalently, it convolves the image with the Laplacian of a Gaussian kernel). This gives us a cliff-face edge detector that uses an adaptive threshold (retrieved via a Canny filter) to extract the key zero set of the image.
Figure 1 – a. Elevation map. b. Cliff map
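The project itself was implemented in MATLAB; purely as an illustrative sketch, the LoG zero-crossing step could look like the following in Python with SciPy. The function name `cliff_edges`, the synthetic step terrain, and the simple gradient-magnitude cutoff standing in for the adaptive Canny-derived threshold are our assumptions, not the project's code:

```python
import numpy as np
from scipy import ndimage

def cliff_edges(elevation, sigma=2.0, grad_thresh=None):
    """Cliff-face edges as zero crossings of the Laplacian of Gaussian (LoG).

    Crossings with weak gradient magnitude are discarded; this simple cutoff
    stands in for the adaptive Canny-derived threshold used in the project.
    """
    elevation = np.asarray(elevation, float)
    log = ndimage.gaussian_laplace(elevation, sigma=sigma)
    # A pixel is a zero crossing if the LoG sign differs from a neighbour's.
    zc = np.zeros(log.shape, dtype=bool)
    zc[:-1, :] |= np.signbit(log[:-1, :]) != np.signbit(log[1:, :])
    zc[:, :-1] |= np.signbit(log[:, :-1]) != np.signbit(log[:, 1:])
    # Keep only crossings where the terrain actually changes steeply.
    gmag = ndimage.gaussian_gradient_magnitude(elevation, sigma=sigma)
    if grad_thresh is None:
        grad_thresh = 0.5 * gmag.mean()
    return zc & (gmag > grad_thresh)

# Synthetic elevation map: a 100 m step "cliff" along one column.
terrain = np.zeros((64, 64))
terrain[:, 32:] = 100.0
edges = cliff_edges(terrain, sigma=2.0)
```

Only horizontal and vertical sign changes are checked here for brevity; they already suffice to locate the contour.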
After obtaining the zero set, we translate its lines into the form of a chain code: each line is represented by a start point and the path it takes on the image grid (north, west, north-west, etc.). This is done in order to simplify the retrieval of the curvature vs. arc length.
We interpolate the lines so that each directional move represents an approximately equal arc-length distance.
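Under the same caveat (an illustrative Python sketch with our own names; the project's code was MATLAB), the chain-code encoding and equal-arc-length resampling might look like:

```python
import numpy as np

# 8-connected chain-code directions, index -> (row step, col step):
# 0=N, 1=NE, 2=E, 3=SE, 4=S, 5=SW, 6=W, 7=NW.
DIRECTIONS = [(-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1), (-1, -1)]

def to_chain_code(points):
    """Encode a pixel path (list of (row, col)) as a start point plus
    a list of direction codes for each grid move."""
    codes = []
    for (r0, c0), (r1, c1) in zip(points, points[1:]):
        codes.append(DIRECTIONS.index((r1 - r0, c1 - c0)))
    return points[0], codes

def resample_equal_arc(points, step=1.0):
    """Resample a polyline so consecutive samples are ~`step` apart along
    the curve, compensating for diagonal moves being sqrt(2) long."""
    pts = np.asarray(points, float)
    seg = np.linalg.norm(np.diff(pts, axis=0), axis=1)
    s = np.concatenate([[0.0], np.cumsum(seg)])       # cumulative arc length
    s_new = np.arange(0.0, s[-1] + 1e-9, step)
    return np.column_stack([np.interp(s_new, s, pts[:, 0]),
                            np.interp(s_new, s, pts[:, 1])])

# A 3-move path: east, south-east, south.
start, codes = to_chain_code([(0, 0), (0, 1), (1, 2), (2, 2)])
resampled = resample_equal_arc([(0, 0), (0, 1), (1, 2), (2, 2)])
```

The resampling step matters because diagonal chain-code moves cover √2 times the distance of axis-aligned ones; after it, each sample represents roughly the same arc length.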
Retrieving the curvature vs. arc length requires us to pass the interpolated chain code through a derivative of Gaussian (DoG) filter. This gives us a smoothed curvature vs. arc length graph. We are only interested in the key features of this graph for the comparison test that follows; thus, we retrieve its local extrema points, since they represent the graph's key features.
After receiving the two sets of point vectors, we choose preliminary matching candidates for each point by examining the curvature in the area around the point on one image and comparing it to the curvature areas around the second image's extrema points.
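One way to sketch this comparison in Python: we simplify the "curvature area" around an extremum to a fixed-size window of curvature values, matched under a Euclidean distance threshold. The window size and threshold here are hypothetical, not the project's empirical values:

```python
import numpy as np

def candidate_matches(kappa_a, ext_a, kappa_b, ext_b, half_win=5, max_dist=0.5):
    """For each extremum index of curve A, list the extrema of curve B whose
    local curvature profile (a window around the extremum) is similar."""
    def window(kappa, i):
        # Clip the window to the curve and edge-pad to a fixed length.
        lo, hi = max(0, i - half_win), min(len(kappa), i + half_win + 1)
        w = kappa[lo:hi]
        return np.pad(w, (half_win - (i - lo), half_win - (hi - 1 - i)),
                      mode='edge')
    matches = {}
    for i in ext_a:
        wa = window(kappa_a, i)
        matches[i] = [j for j in ext_b
                      if np.linalg.norm(window(kappa_b, j) - wa) < max_dist]
    return matches

# Toy example: matching a curvature graph against itself.
kappa = np.array([0.0, 1.0, 0.0, -1.0, 0.0, 2.0, 0.0])
extrema = [1, 3, 5]
matches = candidate_matches(kappa, extrema, kappa, extrema,
                            half_win=1, max_dist=0.1)
```

Each point may end up with several candidates; disambiguating them is exactly what the RANSAC stage below is for.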
Now we have a series of points from one image and possible matches to points in the other image. The problem is thus reduced, so that the following Random Sample Consensus (RANSAC) algorithm can retrieve a rotation and translation that transforms one set of points onto the other, which is the same rotation and translation that moves one image onto the other.
The RANSAC algorithm works as follows:
It selects 4 random extrema points of one image and pairs them with their candidate matching points in the second image.
The comparison is done using a function that gives a closed-form solution of absolute orientation using orthonormal matrices.
The resulting orientation is tested on all the extrema points: the distance between each transformed point and its candidate match is calculated, and the number of distances under an empirical threshold is counted and saved. This is repeated for L iterations.
The orientation that gives the largest number of matches under the threshold across the L iterations is selected and assumed to be the correct orientation.
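The steps above can be sketched as follows in Python. For the closed-form absolute-orientation solution we use the SVD-based construction, which is equivalent in the noise-free least-squares sense to the orthonormal-matrix closed form the project refers to; the function names, the value of L, and the synthetic data are illustrative:

```python
import numpy as np

def absolute_orientation(P, Q):
    """Least-squares rotation R and translation t with R @ p + t ~ q for
    corresponding rows of P and Q (SVD-based closed-form solution)."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                 # cross-covariance of the sets
    U, _, Vt = np.linalg.svd(H)
    # Sign correction keeps R a proper rotation (det = +1), not a reflection.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    return R, cq - R @ cp

def ransac_orientation(P, Q, L=200, thresh=1.0, rng=None):
    """Sample 4 random correspondences L times; keep the orientation that
    maps the most points to within `thresh` of their partners."""
    rng = np.random.default_rng(rng)
    best, best_inliers = None, -1
    for _ in range(L):
        idx = rng.choice(len(P), size=4, replace=False)
        R, t = absolute_orientation(P[idx], Q[idx])
        inliers = np.sum(np.linalg.norm((P @ R.T + t) - Q, axis=1) < thresh)
        if inliers > best_inliers:
            best, best_inliers = (R, t), inliers
    return best, best_inliers

# Synthetic test: a known rotation/translation plus one bad correspondence.
rng = np.random.default_rng(0)
P = rng.normal(size=(20, 3))
R_true = np.array([[np.cos(0.5), -np.sin(0.5), 0.0],
                   [np.sin(0.5),  np.cos(0.5), 0.0],
                   [0.0,          0.0,         1.0]])
t_true = np.array([1.0, 2.0, 3.0])
Q = P @ R_true.T + t_true
Q[0] += 10.0                                  # one outlier correspondence
(R_est, t_est), n_inliers = ransac_orientation(P, Q, L=300, thresh=0.1, rng=1)
```

Because any 4-point sample that avoids the outlier yields the exact orientation on this noise-free data, the outlier is rejected and all 19 good correspondences land under the threshold.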
Figure 2 – Model matching
All of the methods were implemented in MATLAB, a powerful image-processing tool that provides suitable operations for achieving our goal.
The method described in the work of J.J. Rodriguez and J.K. Aggarwal is extended here to recover 3-D orientation and position, and adapted to digital terrain models of different resolutions. The results of the project described in the report show that these improvements indeed hold.
This project shows that this area merits further research and that there is vast room for further improvement. It serves as an introduction to the method, and its applications and results encourage further exploration.
We would like to thank Avishai Adler for his advice, guidance, and great support in this project. We would also like to thank all the personnel of the Vision and Image Science Laboratory, Electrical Engineering Department, IIT, and the Ollendorf Minerva Center for supporting the project.