Feature-Based Image Extrapolation

This project deals with image extrapolation: extending an image beyond its original domain in a way that is visually consistent with the original content.

Abstract

This project deals with image extrapolation. The aim of image extrapolation is to extend the image beyond its original domain in a way that is visually consistent with the original image. The project proposes an approach based on SIFT features, which are invariant to similarity transforms (translation, scaling, and rotation) and therefore make it possible to match similar objects that differ in scale or rotation.

The proposed algorithm consists of three steps. In the first step, the SIFT algorithm builds a database of the image’s features. In the second step, this feature database is used to choose the most suitable completion area for each missing patch. Finally, the RANSAC algorithm computes the affine transformations needed to complete the missing areas.
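As a rough illustration of the final step, here is a minimal sketch of fitting an affine transformation with RANSAC from matched keypoint pairs. OpenCV and NumPy are used as stand-ins (the project itself was built on VLFeat), and the point arrays are hypothetical.

```python
import cv2
import numpy as np

# Hypothetical matched coordinates: src_pts come from the candidate
# completion area, dst_pts from the region that should be extended.
src_pts = np.float32([[10, 12], [40, 15], [38, 60], [12, 58], [25, 33]])
dst_pts = np.float32([[110, 12], [140, 16], [137, 61], [111, 57], [124, 34]])

# Robustly estimate a 2x3 affine matrix; RANSAC discards outlier matches.
M, inliers = cv2.estimateAffine2D(src_pts, dst_pts,
                                  method=cv2.RANSAC,
                                  ransacReprojThreshold=3.0)

print("Affine transform:\n", M)
print("Inlier mask:", inliers.ravel())
```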

The experimental results show that the proposed algorithm achieves good results when used in texture synthesis tasks or when applied to images with very high self-similarity.

 

Problem Overview

The aim of image extrapolation is to extend the image beyond its original domain in a way that is visually consistent with the original image.


 

Proposed Solution

1. Use SIFT to generate a set of features F

2. Divide the image boundary into frames of various sizes

3. For each frame (see the code sketch after this list):

  • Find the subset of F that best matches the features extracted from the frame
  • Use RANSAC to find a transformation T from that subset to the frame’s features
  • Use T to complete the area adjacent to the frame
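A schematic of this per-frame loop, assuming the frame’s keypoints and descriptors, the feature database, and a KD-tree over its signatures have already been computed. Function and variable names are hypothetical, and OpenCV/SciPy stand in for the VLFeat routines used in the project:

```python
import cv2
import numpy as np

def extrapolate_frame(image, frame_kps, frame_desc, db_kps, db_tree):
    """Match one boundary frame against the feature database and warp the
    image with the resulting affine transform."""
    # 1. Nearest database signature for every frame descriptor.
    _, nn_idx = db_tree.query(frame_desc, k=1)

    src = np.float32([db_kps[i].pt for i in nn_idx])   # matched database features
    dst = np.float32([kp.pt for kp in frame_kps])      # features inside the frame

    # 2. RANSAC affine fit from the matched subset to the frame's features.
    T, _ = cv2.estimateAffine2D(src, dst, method=cv2.RANSAC,
                                ransacReprojThreshold=3.0)
    if T is None:
        return None

    # 3. Warp onto a padded canvas; the caller copies only the pixels that
    #    fall in the missing area adjacent to the frame.
    h, w = image.shape[:2]
    pad = 64  # hypothetical extrapolation margin
    return cv2.warpAffine(image, T, (w + pad, h + pad))
```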


 

Dense SIFT

  • In some cases, the basic SIFT algorithm does not produce enough feature points
  • Dense SIFT allows us to place features on a uniformly spaced grid (see the sketch below):
    • We created three scales using Gaussian filters
    • A different grid spacing was chosen for each scale
    • The SIFT signature was computed for every feature point at every scale
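A minimal sketch of this scheme using OpenCV (the project used VLFeat; the sigmas, grid steps, and keypoint size below are illustrative):

```python
import cv2
import numpy as np

def dense_sift(gray, sigmas=(0.0, 1.6, 3.2), steps=(8, 12, 16), kp_size=8):
    """Compute SIFT signatures on uniformly spaced grids over several
    Gaussian-blurred scales of a grayscale image."""
    sift = cv2.SIFT_create()
    all_kps, all_desc = [], []
    for sigma, step in zip(sigmas, steps):
        # Build the current scale (sigma = 0 keeps the original image).
        scale = gray if sigma == 0 else cv2.GaussianBlur(gray, (0, 0), sigma)
        h, w = scale.shape
        # Place keypoints on a uniformly spaced grid for this scale.
        grid = [cv2.KeyPoint(float(x), float(y), kp_size)
                for y in range(step, h - step, step)
                for x in range(step, w - step, step)]
        kps, desc = sift.compute(scale, grid)
        all_kps.extend(kps)
        all_desc.append(desc)
    return all_kps, np.vstack(all_desc)
```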


 

KD-Tree

  • VLFeat’s matching algorithm finds at most one match for each feature point.
  • A KD-tree lets us build a database from the features’ signatures and quickly retrieve the k nearest neighbors of each signature (see the example below).
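For example, with SciPy’s KD-tree (a stand-in for the VLFeat routines used in the project) and hypothetical arrays of 128-dimensional SIFT signatures:

```python
import numpy as np
from scipy.spatial import cKDTree

# Hypothetical descriptor arrays: one row per 128-dimensional SIFT signature.
db_desc = np.random.rand(5000, 128).astype(np.float32)    # feature database
query_desc = np.random.rand(200, 128).astype(np.float32)  # frame features

tree = cKDTree(db_desc)

# k nearest database signatures for every query signature, instead of the
# single best match returned by plain feature matching.
dists, idx = tree.query(query_desc, k=5)
print(idx.shape)  # (200, 5): five candidate matches per frame feature
```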


 

Results

  • Full extrapolation using an unlimited search area
  • Very large frame size
  • Other search methods produced similar results


 

Conclusions

  • A moving window with Dense SIFT produces the best results
  • Good results when identical or very similar objects appear in the image, as well as on synthetic texture images
  • Unable to extrapolate more complicated images
  • Can be used as an add-on to other extrapolation methods in order to improve their results