Multisensor Multitemporal Data Fusion Using Wavelet Transform
Interest in data fusion for remote-sensing applications continues to grow, driven by the increasing importance of obtaining data at high spatial and temporal resolution. Applications that stand to benefit from data fusion include ecosystem disturbance and recovery assessment, ecological forecasting, and others. This paper introduces a novel spatiotemporal fusion approach, the wavelet-based Spatiotemporal Adaptive Data Fusion Model (WSAD-FM). The new technique is motivated by the popular STARFM tool, which uses lower-resolution MODIS imagery to supplement Landsat scenes through a linear model. The novelty of WSAD-FM is twofold. First, unlike STARFM, the technique does not predict an entire new image in one linear step; instead, it decomposes the input images into separate "approximation" and "detail" parts, which are fed into a prediction model that limits the effects of linear interpolation among images. Low-spatial-frequency components are predicted by a weighted mixture of MODIS images and the low-spatial-frequency components of temporally neighboring Landsat images, while high-spatial-frequency components are predicted by a weighted average of the high-spatial-frequency components of the Landsat images alone. Second, the method has demonstrated good performance using only one input Landsat image and a pair of MODIS images. The technique was tested on several Landsat and MODIS images of a study area in central North Carolina (WRS-2 path/row 16/35 in Landsat and tile h11v05 in MODIS) acquired in 2001. NDVI images calculated for the study area were used as input to the algorithm. The technique was evaluated experimentally by predicting existing Landsat images, yielding R² values of 0.70 to 0.92 for estimated Landsat images in the red band and 0.62 to 0.89 for estimated NDVI images.
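The core idea of the abstract — decompose each image with a wavelet transform, predict the approximation (low-spatial-frequency) coefficients from a mix of MODIS and Landsat information, and take the detail (high-spatial-frequency) coefficients from Landsat alone — can be sketched as follows. This is a minimal illustration using PyWavelets, not the paper's exact algorithm: the single-level Haar transform, the simple change-based low-frequency prediction, and the weight `w` are all illustrative choices, and the inputs are assumed to be co-registered images resampled to a common grid.

```python
import numpy as np
import pywt  # PyWavelets


def wsad_fm_predict(landsat_t1, modis_t1, modis_t2, w=0.5, wavelet="haar"):
    """Illustrative wavelet-based fusion in the spirit of WSAD-FM.

    Predicts a Landsat-like image at time t2 from one Landsat image at t1
    and a MODIS pair at t1 and t2 (all on the same grid). Low-frequency
    content is shifted by the weighted MODIS temporal change; high-frequency
    content is kept from the Landsat image alone.
    """
    # Decompose each image into approximation (cA) and detail (cH, cV, cD) parts.
    cA_l, details_l = pywt.dwt2(landsat_t1, wavelet)
    cA_m1, _ = pywt.dwt2(modis_t1, wavelet)
    cA_m2, _ = pywt.dwt2(modis_t2, wavelet)

    # Low-spatial-frequency prediction: Landsat approximation plus the
    # weighted change observed in the MODIS approximations between dates.
    cA_pred = cA_l + w * (cA_m2 - cA_m1)

    # High-spatial-frequency prediction: reuse the Landsat detail
    # coefficients unchanged, then invert the transform.
    return pywt.idwt2((cA_pred, details_l), wavelet)
```

In this simplified form, when the two MODIS images are identical (no temporal change), the prediction reduces to the input Landsat image, since the Haar analysis/synthesis pair is a perfect-reconstruction transform.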