ROAD SEGMENTATION ON LOW RESOLUTION LIDAR POINT CLOUDS FOR AUTONOMOUS VEHICLES

Gigli, L.; Kiran, B. R.; Paul, T.; Serna, A.; Vemuri, N.; Marcotegui, B.; Velasco-Forero, S.

Point cloud datasets for perception tasks in autonomous driving often rely on high-resolution 64-layer Light Detection and Ranging (LIDAR) scanners. Such scanners are expensive to deploy on real-world autonomous driving sensor architectures, which usually employ 16- or 32-layer LIDARs. We evaluate the effect of subsampling image-based representations of dense point clouds on the accuracy of the road segmentation task. In our experiments, low-resolution 16/32-layer LIDAR point clouds are simulated by subsampling the original 64-layer data, which is subsequently transformed into feature maps in the Bird's-Eye View (BEV) and Spherical View (SV) representations of the point cloud. We introduce the local normal vector, computed in the LIDAR's spherical coordinates, as an input channel to existing LoDNN architectures. We demonstrate that this local normal feature, in conjunction with classical features, not only improves performance for binary road segmentation on full-resolution point clouds, but also reduces the loss of accuracy when subsampling dense point clouds compared to using classical features alone. We assess our method with several experiments on two datasets: the KITTI road segmentation benchmark and the recently released SemanticKITTI dataset.
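The two preprocessing steps described in the abstract, simulating a lower-resolution scanner by dropping elevation rings and rasterising the result into a BEV grid, can be sketched as follows. This is a hypothetical illustration, not the authors' code: the ring-binning scheme, grid extents, and cell size are illustrative assumptions, and a real Velodyne scan would use the sensor's actual beam angles rather than uniform elevation bins.

```python
import numpy as np

def subsample_layers(points, n_layers=64, keep_every=2):
    """Simulate a lower-resolution LIDAR: bin points into `n_layers`
    elevation rings and keep every `keep_every`-th ring (64 -> 32 for k=2)."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    horiz_range = np.sqrt(x**2 + y**2)
    elev = np.arctan2(z, horiz_range)  # elevation angle per point
    lo, hi = elev.min(), elev.max()
    # Quantise elevation into n_layers rings over the observed angular span.
    ring = np.clip(((elev - lo) / (hi - lo + 1e-9) * n_layers).astype(int),
                   0, n_layers - 1)
    return points[ring % keep_every == 0]

def bev_occupancy(points, x_range=(0, 40), y_range=(-10, 10), cell=0.1):
    """Binary BEV occupancy grid over a forward-facing region of interest."""
    x, y = points[:, 0], points[:, 1]
    mask = ((x >= x_range[0]) & (x < x_range[1]) &
            (y >= y_range[0]) & (y < y_range[1]))
    xi = ((x[mask] - x_range[0]) / cell).astype(int)
    yi = ((y[mask] - y_range[0]) / cell).astype(int)
    grid = np.zeros((int((x_range[1] - x_range[0]) / cell),
                     int((y_range[1] - y_range[0]) / cell)), dtype=np.uint8)
    grid[xi, yi] = 1
    return grid

# Toy scan: random points in front of the sensor stand in for a real frame.
rng = np.random.default_rng(0)
pts = rng.uniform([0, -10, -2], [40, 10, 1], size=(10000, 3))
sub = subsample_layers(pts, keep_every=2)   # roughly half the rings survive
grid = bev_occupancy(sub)                   # 400 x 200 cells at 0.1 m
```

In the paper's pipeline the BEV tensor would carry additional channels (e.g. height statistics and the proposed local normal feature) rather than bare occupancy; the grid above only shows the rasterisation step.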

Cite

Citation:

Gigli, L. / Kiran, B. R. / Paul, T. / et al.: ROAD SEGMENTATION ON LOW RESOLUTION LIDAR POINT CLOUDS FOR AUTONOMOUS VEHICLES. 2020. Copernicus Publications.


Rights

Rights holder: L. Gigli et al.

Use and reproduction:
