GENERATING 3D CITY MODELS BASED ON THE SEMANTIC SEGMENTATION OF LIDAR DATA USING CONVOLUTIONAL NEURAL NETWORKS
Virtual city models are important for many applications such as urban planning, virtual and augmented reality, disaster management, and gaming. Urban features such as buildings, roads, and trees are essential components of these models and are subject to frequent change. Manually building and updating virtual city models is laborious, due to the large number of instances of such features and their changes over time. The growing amount of publicly available spatial data provides an important source for pipelines that automate virtual city model generation. The large quantity of data also opens an opportunity to use Deep Learning (DL), a technique that minimizes the need for expert domain knowledge. In addition, the calculations of many Deep Learning models can be parallelized on modern hardware such as graphics processing units, which reduces computation time substantially.

We explore the opportunity of using publicly available data to compute multiple thematic data layers from Digital Surface Models (DSMs) with an automatic pipeline powered by a semantic segmentation network. To evaluate this design, we implement our pipeline with multiple Convolutional Neural Networks (CNNs) that follow an encoder-decoder architecture, and we produce a variety of two- and three-dimensional thematic data. We focus our evaluation on the pipeline's ability to produce accurate building footprints. In our experiments we vary the depth, the number of input channels, and the data resolution of the evaluated networks. Our experiments process public data provided by New York City.
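The abstract describes an encoder-decoder segmentation network that maps a DSM raster to a per-pixel class map of the same spatial size (e.g. building vs. non-building). As a minimal illustrative sketch of that shape contract only (not the authors' implementation, and with no learned convolutions), the function names and the `depth=2` setting below are hypothetical; NumPy pooling and nearest-neighbour upsampling stand in for the encoder and decoder stages:

```python
import numpy as np

def max_pool2(x):
    """Downsample a 2D array by a factor of 2 with max pooling (stand-in for an encoder stage)."""
    h, w = x.shape
    return x[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

def upsample2(x):
    """Upsample a 2D array by a factor of 2 via nearest-neighbour repetition (stand-in for a decoder stage)."""
    return x.repeat(2, axis=0).repeat(2, axis=1)

def encoder_decoder_shape_trace(dsm, depth=2):
    """Trace the spatial resolution through `depth` encoder stages and the
    matching decoder stages. The returned array has the input's spatial
    shape, as a per-pixel segmentation output would."""
    x = dsm
    for _ in range(depth):
        x = max_pool2(x)   # encoder: halve resolution at each stage
    for _ in range(depth):
        x = upsample2(x)   # decoder: restore resolution stage by stage
    return x

# A toy 8x8 "DSM" tile of heights; real inputs are large raster grids.
dsm = np.arange(64, dtype=float).reshape(8, 8)
out = encoder_decoder_shape_trace(dsm, depth=2)
print(out.shape)  # (8, 8): spatial size is restored
```

Varying `depth` here mirrors the paper's experiment of varying network depth: each extra encoder stage halves the working resolution, trading spatial detail for context before the decoder recovers the original grid size.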