COMPARATIVE EVALUATION OF DERIVED IMAGE AND LIDAR POINT CLOUDS FROM UAV-BASED MOBILE MAPPING SYSTEMS
Unmanned aerial vehicles (UAVs) have been widely used for 3D reconstruction/modelling in various applications such as precision agriculture, coastal monitoring, and emergency management. For such mapping applications, cameras and LiDAR are the two most commonly used sensors. Image-based mapping is considered an economical and effective option and is often conducted using Structure from Motion (SfM) techniques, through which point clouds and orthophotos are generated. In addition to UAV photogrammetry, point clouds of the area of interest can also be directly derived from LiDAR sensors onboard UAVs equipped with global navigation satellite system/inertial navigation system (GNSS/INS) units. In this study, a custom-built UAV-based mobile mapping system is used to simultaneously collect imagery and LiDAR data. The derived LiDAR and image-based point clouds are investigated and compared in terms of their absolute and relative accuracy. Furthermore, the stability of the system calibration parameters for the camera and LiDAR sensors is studied using temporal datasets. The results show that while the LiDAR point clouds maintain high absolute accuracy over time, the image-based point clouds are less accurate, owing to the instability of the camera's interior orientation parameters.
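One common way to quantify the relative accuracy between two point clouds of the same scene is a cloud-to-cloud comparison: for each point in the evaluated cloud, find its nearest neighbour in the reference cloud and summarize the distances (e.g., as an RMSE). The abstract does not specify the authors' evaluation procedure, so the sketch below is only an illustrative, generic implementation of this idea; the function name and the toy grid clouds are hypothetical.

```python
import math

def cloud_to_cloud_rmse(reference, evaluated):
    """Nearest-neighbour RMSE of `evaluated` against `reference`.

    Both arguments are sequences of (x, y, z) tuples. Uses brute-force
    nearest-neighbour search, which is adequate only for small,
    illustrative clouds; real datasets would need a spatial index
    such as a k-d tree.
    """
    sq_sum = 0.0
    for p in evaluated:
        d2 = min((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2 + (p[2] - q[2]) ** 2
                 for q in reference)
        sq_sum += d2
    return math.sqrt(sq_sum / len(evaluated))

# Toy example: an image-based cloud offset 5 cm in Z from a LiDAR-like
# reference grid (0.5 m spacing), so every nearest-neighbour distance is 0.05 m.
lidar = [(x * 0.5, y * 0.5, 0.0) for x in range(10) for y in range(10)]
image_based = [(x, y, z + 0.05) for (x, y, z) in lidar]
print(round(cloud_to_cloud_rmse(lidar, image_based), 3))  # → 0.05
```

Note that a simple nearest-neighbour metric conflates systematic offsets (such as those caused by unstable interior orientation parameters) with random noise; a full analysis would typically separate the two.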