FusionNet: An Unsupervised Convolutional Variational Network for Hyperspectral and Multispectral Image Fusion

Detailed Bibliography
Published in: IEEE Transactions on Image Processing, Volume 29, pp. 7565-7577
Main Authors: Wang, Zhengjue; Chen, Bo; Lu, Ruiying; Zhang, Hao; Liu, Hongwei; Varshney, Pramod K.
Format: Journal Article
Language: English
Published: New York: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 2020
ISSN: 1057-7149, 1941-0042
Description
Summary: Due to hardware limitations of the imaging sensors, it is challenging to acquire images of high resolution in both the spatial and spectral domains. Fusing a low-resolution hyperspectral image (LR-HSI) and a high-resolution multispectral image (HR-MSI) to obtain an HR-HSI in an unsupervised manner has drawn considerable attention. Though effective, most existing fusion methods are limited by their use of linear parametric models for the spectral mixing process, and even the deep-learning-based methods rely on deterministic fully-connected networks that do not exploit the spatial correlation and local spectral structure of the images. In this paper, we propose FusionNet, a novel variational probabilistic autoencoder framework implemented with convolutional neural networks, to fuse the spatial and spectral information contained in the LR-HSI and the HR-MSI. FusionNet consists of a spectral generative network, a spatially-dependent prior network, and a spatial-spectral variational inference network, which are jointly optimized in an unsupervised manner, yielding an end-to-end fusion system. Further, for fast adaptation to different observation scenes, we give a meta-learning interpretation of the fusion problem and combine FusionNet with meta-learning in a synergistic manner. The effectiveness and efficiency of the proposed method are evaluated on several publicly available datasets, demonstrating that FusionNet outperforms state-of-the-art fusion methods.
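The abstract describes a convolutional variational architecture (a spectral generative network, a spatially-dependent prior network, and a spatial-spectral variational inference network) trained in an unsupervised manner against the two observed images. The snippet below is a minimal, hypothetical PyTorch sketch of that general setup; the band counts, layer widths, scale factor, and degradation operators are illustrative assumptions, not the architecture reported in the paper.

```python
# Hypothetical sketch of a convolutional VAE-style HSI/MSI fusion setup.
# Band counts, layer widths, and degradation operators are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

HSI_BANDS, MSI_BANDS, LATENT = 31, 3, 16   # assumed band/latent sizes
SCALE = 8                                   # assumed HR-MSI / LR-HSI spatial ratio

class InferenceNet(nn.Module):
    """Spatial-spectral inference network q(z | LR-HSI, HR-MSI) (sketch)."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(HSI_BANDS + MSI_BANDS, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
        )
        self.mu = nn.Conv2d(64, LATENT, 3, padding=1)
        self.logvar = nn.Conv2d(64, LATENT, 3, padding=1)

    def forward(self, lr_hsi, hr_msi):
        # Upsample the LR-HSI so both inputs share the HR spatial grid, then fuse.
        lr_up = F.interpolate(lr_hsi, size=hr_msi.shape[-2:], mode="bilinear",
                              align_corners=False)
        h = self.conv(torch.cat([lr_up, hr_msi], dim=1))
        return self.mu(h), self.logvar(h)

class SpectralGenerator(nn.Module):
    """Spectral generative network p(HR-HSI | z) (sketch)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(LATENT, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, HSI_BANDS, 3, padding=1),
        )

    def forward(self, z):
        return self.net(z)

def elbo_step(lr_hsi, hr_msi, enc, dec, srf, blur_down):
    """One unsupervised step: reconstruct both observations from the latent z."""
    mu, logvar = enc(lr_hsi, hr_msi)
    z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)   # reparameterization
    hr_hsi_hat = dec(z)
    # Observation models: spectral response (HSI -> MSI) and spatial degradation (HR -> LR).
    msi_hat = srf(hr_hsi_hat)
    lr_hat = blur_down(hr_hsi_hat)
    recon = F.mse_loss(msi_hat, hr_msi) + F.mse_loss(lr_hat, lr_hsi)
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + 1e-3 * kl

if __name__ == "__main__":
    enc, dec = InferenceNet(), SpectralGenerator()
    srf = nn.Conv2d(HSI_BANDS, MSI_BANDS, 1)   # assumed learnable spectral response
    blur_down = nn.AvgPool2d(SCALE)            # assumed spatial degradation
    lr_hsi = torch.rand(1, HSI_BANDS, 16, 16)
    hr_msi = torch.rand(1, MSI_BANDS, 128, 128)
    loss = elbo_step(lr_hsi, hr_msi, enc, dec, srf, blur_down)
    loss.backward()
    print(float(loss))
```

Note that the loss is computed only against the two observed images (LR-HSI and HR-MSI), which is what makes the training unsupervised; the spatially-dependent prior and the meta-learning adaptation described in the abstract are not represented in this sketch.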
DOI: 10.1109/TIP.2020.3004261