SPATIAL RESOLUTION ENHANCEMENT OF LAND COVER MAPPING USING DEEP CONVOLUTIONAL NETS
Multispectral satellite imagery is the primary data source for monitoring land cover change and characterizing land cover at the global scale. However, the accuracy of land cover classification is often constrained by the spatial and temporal resolutions of the acquired satellite images. This paper proposes a novel spatiotemporal fusion method based on deep convolutional neural networks, motivated by the availability of massive remote sensing data and the large spatial resolution gap between MODIS and Sentinel images. Training was performed on the public SEN12MS dataset, while validation and testing were conducted using ground truth data from the 2020 IEEE GRSS Data Fusion Contest. As a result of data fusion, the synthesized land cover map was more accurate than the corresponding MODIS-derived land cover map, with an enhanced spatial resolution of 10 meters. The proposed approach can be implemented to improve data quality when generating a global land cover product from coarse satellite imagery.
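To make the fusion idea concrete, the sketch below shows a minimal convolutional network that upsamples a coarse MODIS-like patch to the fine Sentinel grid, concatenates it with the fine-resolution bands, and predicts per-pixel land cover logits. This is an illustrative assumption, not the paper's architecture: the class `FusionNet`, the channel counts (7 MODIS bands, 13 Sentinel-2 bands), the layer depths, and the 50x scale factor (500 m to 10 m) are all placeholders chosen for the example.

```python
import torch
import torch.nn as nn

class FusionNet(nn.Module):
    """Illustrative CNN fusing a coarse MODIS-like patch with a fine
    Sentinel-2-like patch to predict land cover on the fine grid.
    All hyperparameters here are example values, not the paper's."""
    def __init__(self, coarse_ch=7, fine_ch=13, n_classes=10, scale=50):
        super().__init__()
        # Bring the coarse input onto the fine spatial grid (500 m -> 10 m).
        self.upsample = nn.Upsample(scale_factor=scale, mode="bilinear",
                                    align_corners=False)
        self.head = nn.Sequential(
            nn.Conv2d(coarse_ch + fine_ch, 64, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, n_classes, 1),  # per-pixel class logits
        )

    def forward(self, coarse, fine):
        # Concatenate upsampled coarse bands with fine bands, then classify.
        x = torch.cat([self.upsample(coarse), fine], dim=1)
        return self.head(x)

# Toy shapes: a 2x2 MODIS patch covering a 100x100 Sentinel patch.
model = FusionNet()
coarse = torch.randn(1, 7, 2, 2)
fine = torch.randn(1, 13, 100, 100)
logits = model(coarse, fine)
print(logits.shape)  # torch.Size([1, 10, 100, 100])
```

Training such a network on SEN12MS would pair each coarse/fine input with a fine-resolution land cover label and optimize a per-pixel cross-entropy loss; the argmax over the class dimension then yields the 10 m land cover map.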