DEEP LEARNING-BASED RECONSTRUCTION OF SPATIOTEMPORALLY FUSED SATELLITE IMAGES FOR SMART AGRICULTURE APPLICATIONS IN A HETEROGENEOUS AGRICULTURAL REGION
Remote sensing offers spatially explicit and temporally continuous observations of land surface parameters such as vegetation index, land surface temperature, soil moisture, leaf area index, and evapotranspiration, which can be leveraged for a wide range of applications at different scales and in different contexts. One of the main applications is agricultural monitoring, where a smart system based on precision agriculture requires satellite imagery with high resolution in both time and space to capture phenological stages and fine spatial details, especially in landscapes with high spatial heterogeneity and temporal variation. These requirements cannot always be met by a single sensor because of the trade-off between spatial and temporal resolution and/or the influence of cloud cover. The availability of data from the new-generation multispectral sensors on the Landsat-8 (L8) and Sentinel-2 (S2) satellites offers unprecedented options for such applications. Given this, the current study aims to demonstrate how the synergistic use of these optical sensors can efficiently support such applications. To this end, this study proposes a deep learning-based spatiotemporal data fusion method to predict a dense time series of vegetation index at fine spatial resolution. The results show that the developed method produces more accurate fused NDVI time-series data, from which phenological stages and characteristics can be derived for single-crop fields, while preserving more spatial detail in such a heterogeneous landscape.
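The NDVI time series at the heart of the abstract is derived per pixel from the red and near-infrared reflectance bands as (NIR − Red) / (NIR + Red). The following is a minimal sketch of that computation using NumPy; the band arrays and their values are hypothetical, not taken from the study's data.

```python
import numpy as np

def ndvi(nir, red, eps=1e-10):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red).

    A small epsilon guards against division by zero where both bands are 0.
    """
    nir = np.asarray(nir, dtype=np.float64)
    red = np.asarray(red, dtype=np.float64)
    return (nir - red) / (nir + red + eps)

# Hypothetical 2x2 surface-reflectance tiles (values in [0, 1])
nir = np.array([[0.4, 0.5],
                [0.6, 0.3]])
red = np.array([[0.1, 0.2],
                [0.1, 0.2]])
print(np.round(ndvi(nir, red), 3))
```

In practice the same formula would be applied to the red and NIR bands of each L8 and S2 acquisition (e.g., L8 bands 4 and 5, S2 bands 4 and 8) before or after fusion, yielding one NDVI image per date in the fused time series.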