LAND COVER CLASSIFICATION OF SATELLITE IMAGES USING CONTEXTUAL INFORMATION
This paper presents a method for classifying satellite images into multiple predefined land cover classes. The proposed approach yields a fully automatic segmentation and classification of each pixel using only a small amount of training data. To this end, we employ semantic segmentation techniques that have already been applied successfully to other computer vision tasks such as facade recognition, and we explain some simple modifications made to adapt the method to remote sensing data. Besides local features, the proposed method also incorporates contextual properties of multiple classes. Our method is flexible and can be extended to any number of channels and combinations thereof. Furthermore, the approach can be adapted to several scenarios, different image scales, or other earth observation applications using spatially resolved data. However, the focus of the current work is on high-resolution satellite images of urban areas. Experiments on a QuickBird image and LiDAR data of the city of Rostock demonstrate the flexibility of the method. Significantly better accuracy is achieved when contextual features are used.
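One common way to realize the contextual features mentioned above is to run a first-stage classifier on local features, then augment each pixel with class probabilities sampled at spatial offsets before a second classification stage. The following is a minimal sketch of that idea; the offset-based sampling, the function name, and the wrap-around border handling are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def add_contextual_features(prob_maps, offsets):
    """Stack per-class probabilities sampled at spatial offsets.

    prob_maps: (H, W, C) per-pixel class probabilities produced by a
               first-stage classifier trained on local features only.
    offsets:   list of (dy, dx) pixel displacements (assumed layout).

    Returns an (H, W, C * (1 + len(offsets))) feature array that a
    second-stage classifier can use to exploit class context.
    """
    feats = [prob_maps]
    for dy, dx in offsets:
        # class probabilities at the displaced location;
        # np.roll wraps at image borders -- acceptable for a sketch
        feats.append(np.roll(prob_maps, shift=(dy, dx), axis=(0, 1)))
    return np.concatenate(feats, axis=-1)

# toy example: 4x4 image, 2 classes
probs = np.zeros((4, 4, 2))
probs[..., 0] = 1.0          # class 0 everywhere ...
probs[2:, 2:, 1] = 1.0       # ... except a class-1 block
probs[2:, 2:, 0] = 0.0
ctx = add_contextual_features(probs, offsets=[(0, 2), (2, 0)])
print(ctx.shape)  # (4, 4, 6)
```

A pixel near the class-1 block now carries features indicating which classes occur nearby, which is what allows the second stage to exploit inter-class context.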