SEMANTICALLY ENRICHED HIGH RESOLUTION LOD 3 BUILDING MODEL GENERATION
This paper reports on an effort to generate LoD3 building models semi-automatically, with the highest possible level of automation. It is work in progress. We use multi-sensor data such as aerial images from a 5-head camera with a GSD of 10 cm, UAV images, and aerial and mobile LiDAR point clouds. We distinguish two cases: one in which LoD2 models are available and one in which they are not. We apply Multi-Photo Geometrically Constrained Least Squares Matching for different kinds of point measurements. The regularity of many building façades in Singapore leads us to generalize the measurement procedure towards measurement macros (geometrical primitives, e.g. windows, doors, etc.) and to combine reality-based with procedural modelling. In parallel, we attempt to model these façade elements from LiDAR point cloud data. In another research line, we perform building detection with a novel approach to land-cover classification that incorporates façade features to improve the classification accuracy. To generate the semantic labels of the façades, we developed a spatially unrelated mean-shift clustering method that yields structurally confined segments. It is characteristic of automated and even semi-automated procedures that the results require some amount of editing. We therefore work on interactive post-editing approaches for CityGML building models containing semantic information for each surface. Maintaining the semantic information throughout the editing process is essential but often lacks support in current tools. Accordingly, we implement a method to synchronize CityGML models. Overall, this project comprises a large number of algorithmic components, which can only be outlined in this paper.
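To make the clustering idea concrete, the following is a minimal sketch of generic flat-kernel mean-shift clustering on scalar features; it illustrates only the standard algorithm, not the spatially unrelated variant developed in this project, and the function name, bandwidth, and sample values are illustrative assumptions.

```python
def mean_shift_1d(values, bandwidth, max_iter=100, tol=1e-6):
    """Flat-kernel mean shift on a list of scalar feature values.

    Each point is iteratively shifted to the mean of all values within
    `bandwidth` of it; points whose shifted positions converge to the
    same mode are assigned the same cluster label.
    """
    modes = list(values)
    for _ in range(max_iter):
        shifted = []
        for m in modes:
            window = [v for v in values if abs(v - m) <= bandwidth]
            shifted.append(sum(window) / len(window))
        moved = max(abs(a - b) for a, b in zip(modes, shifted))
        modes = shifted
        if moved < tol:
            break
    # Merge converged modes closer than the bandwidth into cluster labels.
    centers, labels = [], []
    for m in modes:
        for i, c in enumerate(centers):
            if abs(m - c) <= bandwidth:
                labels.append(i)
                break
        else:
            centers.append(m)
            labels.append(len(centers) - 1)
    return labels, centers


# Illustrative usage: two well-separated groups of façade feature values.
labels, centers = mean_shift_1d([0.0, 0.1, 0.2, 5.0, 5.1, 5.2], bandwidth=1.0)
```

With this toy input the two groups converge to two modes near 0.1 and 5.1, giving labels `[0, 0, 0, 1, 1, 1]`. In the façade-labelling context, the real method operates on richer, spatially unrelated feature vectors.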