TOWARDS AN ACCURATE LOW-COST STEREO-BASED NAVIGATION OF UNMANNED PLATFORMS IN GNSS-DENIED AREAS
While lightweight stereo vision sensors provide detailed, high-resolution information that allows robust and accurate localization, the computational demands of such processing are double those of monocular sensors. In this paper, an alternative model for pose estimation of stereo sensors is introduced, providing an efficient and precise framework for investigating system configurations and maximizing pose accuracy. Using the proposed formulation, we examine the parameters that affect accurate pose estimation and their magnitudes, and show that for standard operational altitudes of ∼50 m, a five-fold improvement in localization is reached, from ∼0.4–0.5 m with a single sensor to less than 0.1 m by taking advantage of the extended field of view of both cameras. Furthermore, this improvement is achieved using cameras with a reduced sensor size, which are more affordable. Hence, a dual-camera setup not only improves the pose estimation but also enables the use of smaller sensors, reducing the overall system cost. Our analysis shows that even a slight modification of the camera directions further improves the positional accuracy and yields attitude angles as accurate as ±6′ (compared to ±20′). The proposed pose estimation method relieves the computational demands of traditional bundle adjustment and is easily integrated with inertial sensors.