FEATURE MATCHING ENHANCEMENT OF UAV IMAGES USING GEOMETRIC CONSTRAINTS
Preliminary matching of image features is based on the distance between their descriptors. Matches are then filtered with RANSAC, or a similar method that fits the matches to a model, usually the fundamental matrix, and rejects matches that do not conform to it. This scheme has several issues. First, mismatches are no longer considered after RANSAC rejection. Second, RANSAC may fail to estimate an accurate model when the proportion of outliers is significant. Third, a fundamental-matrix model can be degenerate even if all the matches are inliers. To address these issues, a new method is proposed that relies on prior knowledge of the images' geometry, which can be obtained from orientation sensors or from a set of initial matches. From a set of initial matches, a fundamental matrix and a global homography are estimated. These two entities are then used in a detect-and-match strategy to obtain more accurate matches: features are detected in one image, and the locations of their correspondences in the other image are predicted using the epipolar constraint and the global homography. The predicted correspondences are then refined with template matching. Since a global homography is valid only as a plane-to-plane mapping, discrepancy vectors are introduced as an alternative to local homographies. The method was tested on Unmanned Aerial Vehicle (UAV) images, which are usually taken in succession, so differences in scale and orientation are not an issue. The method promises to find a well-distributed set of matches over the scene structure, especially in scenes with multiple depths. Furthermore, the number of outliers is reduced, encouraging the use of a least-squares adjustment instead of RANSAC to fit a non-degenerate model.
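The prediction step described above can be sketched as follows. This is an illustrative reading, not the paper's exact formulation: the function name is hypothetical, and the refinement here is a simple orthogonal projection of the homography prediction onto the epipolar line, assuming the feature location is given in pixel coordinates.

```python
import numpy as np

def predict_correspondence(x, H, F):
    """Predict where a feature detected at x in image 1 should appear
    in image 2, combining a global homography H with the epipolar
    constraint induced by the fundamental matrix F."""
    # Homogeneous coordinates of the feature in image 1.
    xh = np.array([x[0], x[1], 1.0])
    # Rough prediction from the global (plane-induced) homography.
    p = H @ xh
    p = p[:2] / p[2]
    # Epipolar line in image 2: l = F x, written a*x' + b*y' + c = 0.
    a, b, c = F @ xh
    # Project the homography prediction orthogonally onto the epipolar
    # line: the nearest point that satisfies the epipolar constraint.
    d = (a * p[0] + b * p[1] + c) / (a * a + b * b)
    return np.array([p[0] - d * a, p[1] - d * b])
```

In the full method, such a prediction would only seed a local template-matching search, which then corrects the correspondence; the residual between the corrected and predicted locations is what a discrepancy vector would capture.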