RGB-D Indoor Plane-based 3D-Modeling using Autonomous Robot

Mostofi, N.; Moussa, A.; Elhabiby, M.; El-Sheimy, N.

3D models of indoor environments provide rich information that can facilitate the disambiguation of different places and speed up a remote user's familiarization with an indoor environment. In this research work, we describe a system for visual odometry and 3D modeling using information from an RGB-D sensor (camera). The visual odometry method estimates the relative pose between consecutive RGB-D frames through feature extraction and matching techniques. The pose estimated by the visual odometry algorithm is then refined with the iterative closest point (ICP) method. Switching between ICP and visual odometry when no features are visible suppresses inconsistency in the final map. Finally, we apply loop closure to remove the deviation between the first and last frames. To give the 3D model semantic meaning, planar patches are segmented from the RGB-D point cloud data using a region growing technique, followed by a convex hull method that assigns boundaries to the extracted patches. To build the final semantic 3D model, the segmented patches are merged using the relative pose information obtained in the first step.
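The core of both the feature-based visual odometry step and each ICP iteration is estimating a rigid transform from matched 3D point pairs. As a minimal illustrative sketch (not the authors' implementation), the closed-form Kabsch/Horn solution in NumPy recovers the relative pose from noise-free correspondences; all names and the synthetic data below are assumptions for demonstration:

```python
import numpy as np

def estimate_rigid_pose(src, dst):
    """Closed-form (Kabsch/Horn) rigid transform from matched 3D points.

    Returns R (3x3) and t (3,) such that dst ≈ src @ R.T + t.
    This is the least-squares alignment step used inside ICP and in
    pose estimation from matched RGB-D features (illustrative sketch).
    """
    c_src = src.mean(axis=0)                # centroid of source points
    c_dst = dst.mean(axis=0)                # centroid of destination points
    H = (src - c_src).T @ (dst - c_dst)     # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_dst - R @ c_src
    return R, t

# Demo with a known pose: 10-degree rotation about z plus a translation
theta = np.deg2rad(10.0)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([0.5, -0.2, 1.0])
src = np.random.default_rng(0).uniform(-1.0, 1.0, size=(50, 3))
dst = src @ R_true.T + t_true

R, t = estimate_rigid_pose(src, dst)
print(np.allclose(R, R_true), np.allclose(t, t_true))  # → True True
```

In a full ICP loop this solver would be called repeatedly, re-matching each point to its current nearest neighbor between iterations; with feature matches from consecutive RGB-D frames it yields the relative pose directly.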



Mostofi, N.; Moussa, A.; Elhabiby, M.; El-Sheimy, N.: RGB-D Indoor Plane-based 3D-Modeling using Autonomous Robot. 2014. Copernicus Publications.


Rights holder: N. Mostofi et al.
