SEMANTIC SEGMENTATION OF MANMADE LANDSCAPE STRUCTURES IN DIGITAL TERRAIN MODELS
We explore the use of semantic segmentation of Digital Terrain Models (DTMs) for detecting manmade landscape structures in archaeological sites. DTM data are stored and processed as large single-channel matrices, as opposed to the three channels of RGB images. These matrices usually contain continuous real-valued information with no fixed upper bound, such as distance or height from a reference surface, whereas RGB images contain integer values in the fixed range of 0 to 255. Additionally, RGB images are usually stored as smaller multidimensional arrays and can be fed directly to a neural network, while large DTMs must be split into smaller sub-matrices before a network can process them. Thus, while the spatial information of pixels in an RGB image matters only locally, within a single image, for DTM data it matters both locally, within a single sub-matrix processed by the network, and globally, in relation to the neighboring sub-matrices. To cope with these two differences, we apply min-max normalization to each input matrix fed to the neural network and use a slightly modified version of the DeepLabv3+ model for semantic segmentation. We show that the architecture change and the preprocessing together yield better results.
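The tiling and per-tile min-max normalization described in the abstract could be sketched as follows. This is a minimal illustration, not the paper's implementation; the tile size, function names, and the small epsilon guarding against flat tiles are all assumptions made for the example.

```python
import numpy as np

def tile_dtm(dtm, tile_size):
    """Split a large 2-D DTM into non-overlapping square sub-matrices.

    Edge tiles smaller than tile_size are discarded for simplicity;
    a real pipeline might pad the DTM instead (an assumption here).
    """
    h, w = dtm.shape
    tiles = []
    for i in range(0, h - tile_size + 1, tile_size):
        for j in range(0, w - tile_size + 1, tile_size):
            tiles.append(dtm[i:i + tile_size, j:j + tile_size])
    return tiles

def minmax_normalize(tile, eps=1e-8):
    """Rescale one tile to [0, 1] using its own min and max.

    Because DTM values are unbounded reals (heights/distances),
    each sub-matrix is normalized independently before being fed
    to the network.
    """
    lo, hi = tile.min(), tile.max()
    return (tile - lo) / (hi - lo + eps)

# Example: a synthetic 4x4 "DTM" of heights, split into 2x2 tiles
dtm = np.arange(16, dtype=np.float64).reshape(4, 4)
tiles = [minmax_normalize(t) for t in tile_dtm(dtm, 2)]
```

Normalizing each tile independently maps arbitrary height ranges into the bounded interval a network expects, at the cost of losing the global scale across neighboring tiles, which is exactly the local-versus-global tension the abstract highlights.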