AIRBORNE LIDAR POINT CLOUD CLASSIFICATION FUSION WITH DIM POINT CLOUD
The fusion of airborne Light Detection and Ranging (LiDAR) point clouds with image data has been widely studied. With recent advances in photogrammetric technology, however, images can now also provide dense image matching (DIM) point clouds. To exploit such DIM points, a sample selection framework is introduced. First, geometric features are extracted from both the LiDAR points and the DIM points, and the feature vector of each point is treated as one sample. Then, the binary TrAdaBoost classifier is extended to a multi-class one and trained on all the samples. The classifier automatically assigns weights to the DIM samples: useful samples receive large weights and therefore strongly influence the classification result, and vice versa. As a result, the useful DIM samples are retained to improve the classification performance on the LiDAR points. Because only the samples are used, no registration between the DIM points and the LiDAR points is needed; moreover, DIM points that capture similar classes but not the same scene as the LiDAR points can also be used. With our framework, existing aerial images can thus be fully exploited. To test its generalization ability, the framework is also applied to a super-voxel-based classification approach by replacing the point-based features with super-voxel-based features. The experiments show that fusion improves the classification of the LiDAR points regardless of whether the DIM points cover the same area as the LiDAR data, and that the higher the quality of the DIM points, the better the classification performance.
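The weighting mechanism described in the abstract is that of TrAdaBoost: auxiliary samples that are misclassified have their weights shrunk, while misclassified target samples have theirs boosted, so only auxiliary samples that help on the target domain keep a large influence. The following is a minimal sketch of the original binary TrAdaBoost (Dai et al., 2007), not the paper's multi-class extension; the decision-stump base learner, the function names, and the mapping of "source" to DIM samples and "target" to LiDAR samples are assumptions for illustration only.

```python
import numpy as np

def fit_stump(X, y, w):
    """Fit a weighted decision stump: pick the (feature, threshold,
    polarity) triple with the lowest weighted 0/1 error."""
    best, best_err = (0, 0.0, 1), np.inf
    for j in range(X.shape[1]):
        for thr in np.unique(X[:, j]):
            for pol in (1, -1):
                pred = (pol * (X[:, j] - thr) > 0).astype(int)
                err = np.sum(w * (pred != y))
                if err < best_err:
                    best, best_err = (j, thr, pol), err
    return best

def stump_predict(stump, X):
    j, thr, pol = stump
    return (pol * (X[:, j] - thr) > 0).astype(int)

def tradaboost(X_src, y_src, X_tgt, y_tgt, n_iters=10):
    """Binary TrAdaBoost (Dai et al., 2007).

    Misclassified source samples (in the paper's setting, DIM-point
    samples) have their weights shrunk, while misclassified target
    samples (LiDAR-point samples) have theirs boosted, so only source
    samples that help on the target domain retain a large influence.
    """
    n_src = len(X_src)
    X = np.vstack([X_src, X_tgt])
    y = np.concatenate([y_src, y_tgt])
    w = np.ones(len(X))
    # fixed shrink factor for source-sample weights
    beta_src = 1.0 / (1.0 + np.sqrt(2.0 * np.log(n_src) / n_iters))
    stumps, betas = [], []
    for _ in range(n_iters):
        p = w / w.sum()
        stump = fit_stump(X, y, p)
        miss = (stump_predict(stump, X) != y).astype(float)
        # weak-learner error measured on the target domain only
        eps = np.sum(p[n_src:] * miss[n_src:]) / np.sum(p[n_src:])
        eps = min(eps, 0.49)                   # keep beta_t < 1
        beta_t = eps / (1.0 - eps) if eps > 0 else 1e-10
        w[:n_src] *= beta_src ** miss[:n_src]  # shrink bad source samples
        w[n_src:] *= beta_t ** -miss[n_src:]   # boost hard target samples
        stumps.append(stump)
        betas.append(beta_t)
    return stumps, betas, w

def predict(stumps, betas, X):
    """Final hypothesis: weighted vote of the second half of the
    learners, as in the original TrAdaBoost formulation."""
    half = len(stumps) // 2
    score, total = np.zeros(len(X)), 0.0
    for s, b in zip(stumps[half:], betas[half:]):
        a = -np.log(b)
        score += a * stump_predict(s, X)
        total += a
    return (score >= 0.5 * total).astype(int)
```

After training, the final weight vector `w` indicates which source (DIM) samples the classifier found useful for the target (LiDAR) domain, which is the sample selection effect the abstract relies on; the paper's actual method additionally generalizes this to multiple classes.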