AN RGB-D DATA PROCESSING FRAMEWORK BASED ON ENVIRONMENT CONSTRAINTS FOR MAPPING INDOOR ENVIRONMENTS
The adoption of RGB and depth (RGB-D) sensors for surveying applications (e.g., building information modeling [BIM], indoor navigation, and three-dimensional [3D] modeling) as a replacement for expensive and time-consuming methods (e.g., stereo cameras, laser scanners) has recently attracted great attention. Due to the distinctive structure and scalability of indoor environments, the depth quality produced by RGB-D cameras and the simultaneous localization and mapping (SLAM) system responsible for camera pose estimation remain substantial problems in existing RGB-D mapping systems. This study introduces a new RGB-D data processing framework that adopts two-dimensional (2D) and 3D features from RGB and depth images. To overcome the self-repetitive structure of indoor environments, the proposed framework uses novel description functions for both line and plane features extracted from RGB and depth images, enabling feature matching between successive RGB-D frames. The framework also estimates the camera pose by minimizing the combined geometric distance of both 2D and 3D features. Using the previously known structure of the indoor environment, the framework leverages structural constraints to enhance 3D model precision. In addition, the framework adopts a graph-based optimization technique to distribute the closure error over the graph's nodes and edges when a loop closure is detected. The visual RGB-D SLAM system and the sensor's default tracking system (SensorFusion) were used as baselines to assess the performance of the proposed framework. The results show that the proposed framework achieves a significant improvement in 3D model accuracy.