Galar, M.; Sesma, R.; Ayala, C.; Aranda, C.

Obtaining Sentinel-2 imagery at a higher spatial resolution than the native bands, while ensuring that the output preserves the original radiometry, has become a key issue since the deployment of the Sentinel-2 satellites. Several studies have addressed upsampling the 20 m and 60 m Sentinel-2 bands to 10 m resolution by taking advantage of the 10 m bands. However, how to super-resolve the 10 m bands to higher resolutions is still an open problem. Recently, deep learning-based techniques have become the de facto standard for single-image super-resolution. The problem is that training a neural network for super-resolution requires image pairs at both the original resolution (10 m for Sentinel-2) and the target resolution (e.g., 5 m or 2.5 m). Since higher-resolution images cannot be obtained for Sentinel-2, we propose to use images from other sensors with the greatest similarity in terms of spectral bands, appropriately pre-processed. These images, together with the Sentinel-2 images, form our training set. We carry out several experiments using state-of-the-art convolutional neural networks for single-image super-resolution, showing that this methodology is a first step toward greater spatial resolution of Sentinel-2 images.
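The pairing strategy described in the abstract can be illustrated with a minimal sketch: a high-resolution patch (e.g., from a similar sensor) is degraded to Sentinel-2's 10 m resolution to form a (low-resolution input, high-resolution target) training pair. The block-averaging degradation used here is an assumption for illustration only; the paper's actual pre-processing pipeline is not specified in this record.

```python
import numpy as np

def make_training_pair(hr_patch, scale=2):
    """Simulate the low-resolution input by block-averaging the
    high-resolution patch (a simple stand-in for the real sensor
    degradation). Returns (lr, hr) as a supervised training pair."""
    h, w = hr_patch.shape
    assert h % scale == 0 and w % scale == 0, "patch must divide evenly"
    lr = hr_patch.reshape(h // scale, scale, w // scale, scale).mean(axis=(1, 3))
    return lr, hr_patch

# Example: a 4x4 "5 m" patch degraded to a 2x2 "10 m" input.
hr = np.arange(16, dtype=float).reshape(4, 4)
lr, target = make_training_pair(hr, scale=2)
print(lr.shape, target.shape)  # (2, 2) (4, 4)
```

A network trained on such pairs at 10 m → 5 m (or 10 m → 2.5 m) scale can then be applied to real 10 m Sentinel-2 bands at inference time.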



Galar, M.; Sesma, R.; Ayala, C.; Aranda, C.: SUPER-RESOLUTION FOR SENTINEL-2 IMAGES. 2019. Copernicus Publications.


Rights holder: M. Galar et al.

Use and reproduction: